Understanding Marketing Mix Modeling with Meta’s Robyn

Rochan Nehete
13 min read · Jun 5, 2023


Last night I was watching the movie ‘Air’ with a friend and was blown away by how much risk the people in the film were willing to take.

It got me thinking: if I had the next Michael Jordan ready to endorse my small-scale brand, how would I go about allocating the marketing budget in the first place? I suddenly had a flashback to a method I encountered in my previous work experience, Marketing Mix Modeling. In this article, I explore Robyn, an open-source library for MMM developed by Meta. But before that,

What is Marketing Mix Modeling (MMM)?

Marketing mix modeling (MMM) is a statistical analysis technique used to measure and evaluate the impact of various marketing variables on sales and other key performance indicators. It involves analyzing historical data to quantify the effects of marketing activities such as advertising, promotions, pricing, and distribution channels, and determining their individual and combined contributions to business outcomes. MMM helps businesses optimize their marketing strategies by identifying the most effective allocation of resources and understanding the ROI of different marketing investments. It's like putting all your marketing ingredients into a big pot and seeing what delicious dish comes out!

So your brand has decided to perform MMM and has a budget to do so, but how should it go about it? There are three widely accepted ways, ordered by decreasing monetary investment and increasing internal resources required:

  1. 3rd-party vendors ($$$): There are 3rd-party vendors that specialize in MMM, such as Nielsen. They drive the entire engagement, providing a high level of consultancy.
  2. MMM SaaS products ($$): There are products in the market, such as Analytics Edge, that provide MMM solutions.
  3. Open-source solutions ($): Brands can drive their own MMM efforts using an open-source library such as Robyn, an open-source R project developed by Meta’s Marketing Science division with comprehensive documentation.

Each of these has its own pros and cons. Whichever you choose, the following common steps are involved:

  1. Data Collection: Gathering relevant data on marketing activities and sales to understand the impact of different elements in the marketing mix.
  2. Data Analysis: Analyzing the collected data to identify patterns, correlations, and trends that reveal the effectiveness of marketing efforts.
  3. Variable Selection: Selecting the key variables or factors that significantly influence sales and marketing performance.
  4. Model Building: Developing a statistical model that quantifies the relationship between marketing variables and sales outcomes.
  5. Model Validation: Testing and validating the accuracy and reliability of the model to ensure its effectiveness in predicting future outcomes.
  6. Scenario Simulation: Running simulations and “what-if” scenarios to assess the potential impact of changes in marketing strategies and variables.

For my purposes, I used the simulated dataset that ships with Robyn.

Robyn uses Facebook’s Prophet to perform time series decomposition, ridge regression to regularize against multicollinearity and prevent overfitting, and Facebook’s hyperparameter optimization library Nevergrad for model selection.

Data Exploration and Feature Engineering

Following is what our dataset looks like:

Robyn’s dt_simulated_weekly dataset

We input this dataset into Robyn using the robyn_inputs() method.
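
For reference, the simulated dataset and the Prophet holiday table ship with the Robyn package and can be loaded directly, as in Robyn's demo script (package installation shown for completeness):

# install.packages("Robyn") # Robyn is available on CRAN
library(Robyn)

# Bundled simulated weekly dataset and Prophet holiday table
data("dt_simulated_weekly")
data("dt_prophet_holidays")

head(dt_simulated_weekly) # DATE, revenue, media spend/exposure and context columns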

For MMM, weekly data is generally used, as daily data tends to be noisy. A common recommendation is to have at least 2–3 years of historical data, including detailed information on marketing activities, sales or revenue data, and other relevant variables such as pricing, promotions, and external factors (e.g., economic indicators). The more data available, the better the model can capture seasonality, trends, and other patterns. Robyn also supports Prophet’s holiday data, and we simply need to specify the country code (e.g., US, DE) to make use of it.

Our dependent variable is revenue, but if the business demands it, we can use conversions instead. There can only be one dependent variable.

Paid Media Variables: Any media variable with a clear marketing spend falls into this category. This includes TV, out-of-home advertising (e.g., billboards), Facebook (facebook_I = impressions, facebook_S = spend), and search clicks. For exposure-level metrics, we can use impressions, clicks, or GRPs (Gross Rating Points = reach % × average frequency of impressions; e.g., 40% reach at an average frequency of 3 gives 120 GRPs).

While defining channels, it is often better to split them into sub-channels for better performance. For example, a real-world Facebook ad campaign can be split into fb_prospecting (to acquire people not yet familiar with your product) and fb_retargeting (for existing customers).

Context Variables: These are variables that are neither paid nor organic media but can help explain the dependent variable. We can pass context variables (context_vars) to Robyn, such as competitor sales, price and promotional events, temperature, the unemployment rate, CPI, or COVID. This is one of the focus points during the data collection phase.

Organic Variables: Any marketing activity without a clear marketing spend falls into this category, including social media posts, push notifications, and newsletter sends, passed via the organic_vars argument. For all variable types in Robyn, there is support for constraining a variable’s relationship with the dependent variable as default, positive, or negative. We can also force context or organic variables to be treated as categorical using the factor_vars argument.

Defining a window: The window definition gives us the option to select a modeling window that best describes our business. This is especially helpful as businesses keep changing over time. For example, an ad channel such as Instagram may have been profitable previously but is now saturated due to competition.

Adstock

Adstock is a concept used in marketing and advertising analytics to capture the lingering, carryover impact of past advertising exposures on consumer behavior: an ad shown today keeps driving responses in later periods, with the effect decaying over time. In Robyn, we can choose between three types of adstock:

1. Geometric — Fixed decay rate over time, which only requires 1 parameter. This will run the model faster than the others.

2. Weibull CDF — Flexible decay rate with 2 parameters to control the S-shape of the decay and inflection point of the decay curve.

3. Weibull PDF — Flexible decay rate with the same 2 parameters; the only difference is that the PDF offers a lagged effect.

I chose geometric adstock for my model. Depending on the adstock chosen, we specify ranges for the model's hyperparameters.
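
To make the geometric option concrete, here is a minimal sketch of a geometric adstock transformation, written by me for illustration rather than taken from Robyn's internals: each period keeps a fixed fraction (theta) of the previous period's adstocked value.

geometric_adstock <- function(x, theta) {
  out <- numeric(length(x))
  out[1] <- x[1]
  for (t in 2:length(x)) {
    out[t] <- x[t] + theta * out[t - 1] # carryover from the previous period
  }
  out
}

# Example: weekly spend with a 30% weekly carryover
spend <- c(100, 0, 0, 50, 0)
geometric_adstock(spend, theta = 0.3)
# 100.00  30.00   9.00  52.70  15.81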

Saturation

Robyn utilizes the Hill function to reflect the saturation of each media channel. In Robyn, the Hill function is a two-parameter function with alpha and gamma:

  • Alpha controls the shape of the curve between exponential and S-shape. The recommended bound is c(0.5, 3) — note that the larger the alpha, the more S-shaped, and the smaller the alpha, the more C-shaped the curve.
  • Gamma controls the inflection point. The recommended bound is c(0.3, 1) — note that the larger the gamma, the later the inflection point in the response curve.
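
A minimal sketch of the Hill saturation transform, based on my reading of Robyn's documentation (gamma positions the inflection point between the minimum and maximum of the input); illustrative only, not Robyn's internal code:

saturation_hill <- function(x, alpha, gamma) {
  # gamma positions the inflection point between min(x) and max(x);
  # alpha controls how C-shaped vs. S-shaped the curve is
  inflection <- (1 - gamma) * min(x) + gamma * max(x)
  x^alpha / (x^alpha + inflection^alpha)
}

spend <- seq(0, 100000, by = 20000)
round(saturation_hill(spend, alpha = 2, gamma = 0.5), 2)
# 0.00 0.14 0.39 0.59 0.72 0.80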

Prophet Seasonality Decomposition

Prophet is Meta’s open-source library for forecasting time series data. Prophet is automatically included in Robyn to decompose the data into trend, seasonality, holiday, and weekday effects, in order to improve the model fit and its ability to forecast. This is what the time series decomposition from Prophet within Robyn looks like:

Time series decomposition from Prophet

Overall, this is what our InputCollect variable for robyn_inputs looks like:

InputCollect <- robyn_inputs(
  dt_input = dt_simulated_weekly,
  dt_holidays = dt_prophet_holidays,
  date_var = "DATE", # date format must be "2020-01-01"
  dep_var = "revenue", # there should be only one dependent variable
  dep_var_type = "revenue", # "revenue" (ROI) or "conversion" (CPA)
  prophet_vars = c("trend", "season", "holiday"), # "trend", "season", "weekday" & "holiday"
  prophet_country = "DE", # input one country. dt_prophet_holidays includes 59 countries by default
  context_vars = c("competitor_sales_B", "events"), # e.g. competitors, discount, unemployment etc.
  paid_media_spends = c("tv_S", "ooh_S", "print_S", "facebook_S", "search_S"), # mandatory input
  paid_media_vars = c("tv_S", "ooh_S", "print_S", "facebook_I", "search_clicks_P"), # mandatory
  organic_vars = "newsletter", # marketing activity without media spend
  factor_vars = c("events"), # force variables in context_vars or organic_vars to be categorical
  window_start = "2016-01-01",
  window_end = "2018-12-31",
  adstock = "geometric" # geometric, weibull_cdf or weibull_pdf
)
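
The adstock and saturation bounds discussed above are then attached to InputCollect in a second robyn_inputs() call. Below is a sketch following the naming convention from Robyn's demo (channel_alphas, channel_gammas and, for geometric adstock, channel_thetas); the exact theta ranges here are illustrative:

hyperparameters <- list(
  facebook_S_alphas = c(0.5, 3), facebook_S_gammas = c(0.3, 1), facebook_S_thetas = c(0, 0.3),
  tv_S_alphas = c(0.5, 3), tv_S_gammas = c(0.3, 1), tv_S_thetas = c(0.3, 0.8),
  ooh_S_alphas = c(0.5, 3), ooh_S_gammas = c(0.3, 1), ooh_S_thetas = c(0.1, 0.4),
  print_S_alphas = c(0.5, 3), print_S_gammas = c(0.3, 1), print_S_thetas = c(0.1, 0.4),
  search_S_alphas = c(0.5, 3), search_S_gammas = c(0.3, 1), search_S_thetas = c(0, 0.3),
  newsletter_alphas = c(0.5, 3), newsletter_gammas = c(0.3, 1), newsletter_thetas = c(0.1, 0.4),
  train_size = c(0.5, 0.8) # only used when ts_validation = TRUE in robyn_run()
)

InputCollect <- robyn_inputs(InputCollect = InputCollect, hyperparameters = hyperparameters)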

Modeling Techniques

MMM uses regression, which aims to derive an equation that explains the dependent variable with high explainability. Regression can also be replaced by other modeling techniques with lower explainability (which can be compensated for by using SHAP).

Overfitting and multicollinearity are commonly addressed issues in regression analysis. We can use the Variance Inflation Factor (VIF) to detect multicollinearity during our EDA.

VIF is below 5 for all independent variables, indicating no serious multicollinearity
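
As an illustration of the VIF check (this step sits outside Robyn; the column choice below follows the simulated dataset and is my own assumption about what to include):

library(car)   # provides vif()
library(Robyn) # for the simulated dataset

data("dt_simulated_weekly")
df <- as.data.frame(dt_simulated_weekly)

# Plain linear model on the spend and context columns, used only for the VIF diagnostic
fit <- lm(revenue ~ tv_S + ooh_S + print_S + facebook_S + search_S + competitor_sales_B, data = df)
vif(fit) # values below ~5 suggest multicollinearity is not a serious concern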

But we can also address these issues together using regularization. Robyn uses ridge regression, which shrinks the coefficients of variables that contribute little to the dependent variable toward zero.
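
Conceptually, a standalone ridge fit looks like the sketch below (using glmnet with alpha = 0 for the L2 penalty); this illustrates the idea and is not Robyn's internal code:

library(glmnet)
library(Robyn)

data("dt_simulated_weekly")
df <- as.data.frame(dt_simulated_weekly)
x <- as.matrix(df[, c("tv_S", "ooh_S", "print_S", "facebook_S", "search_S")])
y <- df$revenue

# alpha = 0 selects the ridge (L2) penalty; cv.glmnet chooses lambda by cross-validation
ridge_cv <- cv.glmnet(x, y, alpha = 0)
coef(ridge_cv, s = "lambda.min") # shrunken coefficients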

Model Selection

Robyn uses Nevergrad, an optimization library developed by Facebook, for hyperparameter tuning. While selecting the optimal model, it looks at two main evaluation criteria over the iterations, which can be visualized in the Pareto-front chart:

  1. NRMSE: The Normalized Root Mean Squared Error measures the model’s prediction error. Plain RMSE can be used to compare different models, but it doesn’t work well when comparing fits for different response variables, or when the response variable is standardized, log-transformed, or otherwise modified. Normalizing the error by the range of the response overcomes these issues.
  2. DECOMP.RSSD: The Decomposition Root Sum of Squared Distance is also referred to as the business error. It measures the distance between each channel’s share of spend and its share of the decomposed effect, and tells us whether the model is realistic. For example, if I put 80% of my budget into TikTok and Robyn assigns TikTok 0% of the impact, the decomposition doesn’t make business sense.

Pareto front chart for model selection by Nevergrad

OutputModels <- robyn_run(
  InputCollect = InputCollect, # feed in all model specification
  cores = NULL, # NULL defaults to (max available - 1)
  iterations = 2000, # 2000 recommended for the dummy dataset with no calibration
  trials = 5, # 5 recommended for the dummy dataset
  ts_validation = TRUE, # 3-way-split time series for NRMSE validation
  add_penalty_factor = FALSE # Experimental feature. Use with caution.
)

print(OutputModels)
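
For reference, here is a compact sketch of the two errors from the Pareto chart, in my own notation based on Robyn's documentation:

# NRMSE: RMSE normalized by the range of the response
nrmse <- function(y, y_hat) sqrt(mean((y - y_hat)^2)) / (max(y) - min(y))

# DECOMP.RSSD: root sum of squared distances between each paid channel's
# share of total spend and its share of the decomposed effect
decomp_rssd <- function(spend_share, effect_share) sqrt(sum((effect_share - spend_share)^2))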

Model Calibration

Calibration anchors the model to experimental ground truth and tells us how much to trust its channel-level estimates. It is a highly recommended step in Marketing Mix Modeling, provided an incrementality test has been performed before committing the marketing budget.

We can obtain ground truth to calibrate our model by testing advertising operations on a subset of the audience split into two groups: control and test. The control group is not exposed to any ads, and the revenue/conversions obtained from it form a natural baseline, whereas the test group is exposed to the advertisements and its revenue/conversions are measured. From these two outputs, we can calculate the corresponding lift metrics. The control and test groups can be split by people, geography, or device. The longer the test runs, the more of the long-tail effect can be captured.
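
As a toy example of how the absolute lift might be computed (the numbers are made up):

# Hypothetical geo-test results for one channel over the test window
test_revenue <- 1200000 # revenue in regions exposed to the ads
control_revenue <- 800000 # baseline revenue in comparable hold-out regions (scaled to the same size)

lift_abs <- test_revenue - control_revenue
lift_abs # 400000: the kind of value passed as liftAbs below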

Robyn can take the calibration data as an input before modeling, which enables Nevergrad to use MAPE (Mean Absolute Percentage Error) in addition to the two evaluation metrics above.

calibration_input <- data.frame(
  # channel name must be in paid_media_vars
  channel = c("facebook_S", "tv_S", "facebook_S+search_S", "newsletter"),
  # liftStartDate must be within input data range
  liftStartDate = as.Date(c("2018-05-01", "2018-04-03", "2018-07-01", "2017-12-01")),
  # liftEndDate must be within input data range
  liftEndDate = as.Date(c("2018-06-10", "2018-06-03", "2018-07-20", "2017-12-31")),
  # Provided value must be tested on same campaign level in model and same metric as dep_var_type
  liftAbs = c(400000, 300000, 700000, 200),
  # Spend within experiment: should match within a 10% error your spend on date range for each channel from dt_input
  spend = c(421000, 7100, 350000, 0),
  # Confidence: if frequentist experiment, you may use 1 - pvalue
  confidence = c(0.85, 0.8, 0.99, 0.95),
  # KPI measured: must match your dep_var
  metric = c("revenue", "revenue", "revenue", "revenue"),
  # Either "immediate" or "total". For experimental inputs like Facebook Lift, "immediate" is recommended.
  calibration_scope = c("immediate", "immediate", "immediate", "immediate")
)

InputCollect <- robyn_inputs(InputCollect = InputCollect, calibration_input = calibration_input)

Interpreting Modeling Results

Our modeling approach gives us the best models from the lower-left region of the Pareto-front chart. We can analyze each model’s performance, the effects of individual variables, and their combined contribution to business outcomes. Robyn produces a set of visualizations for each model, collected into a one-pager that makes models easy to compare.

create_files <- TRUE # export charts and CSVs locally
robyn_directory <- "~/Desktop" # example path for plot exports and file creation

OutputCollect <- robyn_outputs(
  InputCollect, OutputModels,
  pareto_fronts = "auto", # automatically pick how many Pareto fronts are needed to fill min_candidates (100)
  # min_candidates = 100, # top Pareto models for clustering. Default to 100
  # calibration_constraint = 0.1, # range c(0.01, 0.1) & default at 0.1
  csv_out = "pareto", # "pareto", "all", or NULL (for none)
  clusters = TRUE, # Set to TRUE to cluster similar models by ROAS. See ?robyn_clusters
  export = create_files, # this will create files locally
  plot_folder = robyn_directory, # path for plot exports and file creation
  plot_pareto = create_files # Set to FALSE to deactivate plotting and saving model one-pagers
)

print(OutputCollect)

One-pager report for each model, generated by robyn_onepagers

select_model <- "1_122_7" # placeholder: pick one of the Pareto-optimal model IDs printed by OutputCollect
myOnePager <- robyn_onepagers(InputCollect, OutputCollect, select_model, export = FALSE)

At the top, you can see the R-squared (R²) value along with NRMSE, DECOMP.RSSD, and MAPE. The R-squared value tells us how much of the variance of the dependent variable is explained by the independent variables in the regression, typically referred to as the goodness of fit of the model to the observed data.

The waterfall chart shows what percentage of revenue is attributed to each channel. For example, competitor sales account for 49.3% of revenue, which means that as the competitor sells more, it also triggers conversions or revenue for our brand. This can indicate that your target audience is actively comparing and evaluating options from different competitors before making a purchasing decision.

The actual vs. predicted response shows how well our model predicted revenue over time.

The Total Spend % vs. Effect % horizontal bar chart shows how much we spent on each channel versus how much effect each channel produced.

The response curves and mean spends by channel, sometimes called saturation curves, indicate if a specific media channel’s spend is at an optimal level or if it is approaching saturation and therefore suggest potential budget reallocation.

The adstock decay chart shows, on average, the decay (carryover) rate of each channel. The higher the decay rate, the longer that channel’s effect persists after the initial exposure.

The fitted vs. residual chart helps us identify outliers, which in turn can point to variables we may have left out of the analysis (for example, COVID).

After weighing the different trade-offs and reconsidering variables we might have left out, we select the model that best meets our business needs and use it for budget allocation.

Budget Allocation

The most crucial part of decision-making involves considering what-if scenarios for a budget. In this section, we look at how to answer the different questions decision-makers may ask about our modeling results.

For every selected model result, the robyn_allocator() function can be applied to get the optimal budget mix that maximizes the response (revenue or conversions). Now for the fun part.

Scenario 1: What is the maximum return I can obtain given a certain spend?

To answer this, we use the Robyn allocator as follows:

AllocatorCollect <- robyn_allocator(
  InputCollect = InputCollect,
  OutputCollect = OutputCollect,
  select_model = select_model, # Our selected model
  # date_range = "last_10", # specifies the period the question is asked about. Last 10 periods, same as c("2018-10-22", "2018-12-31"). Default = latest month
  total_budget = 5000000, # Total budget for the date_range period simulation
  channel_constr_low = c(0.8, 0.7, 0.7, 0.7, 0.7), # Minimum spend per channel
  channel_constr_up = c(1.2, 1.5, 1.5, 1.5, 1.5), # Maximum spend per channel
  channel_constr_multiplier = 3, # Customize bound extension for wider insights
  scenario = "max_response", # Maximize the response variable, revenue in our case
  export = create_files
)

Budget allocation for maximum ROI with a fixed budget (refer to the blue charts)

We can see that, by keeping total spend the same, we can increase ROAS by 22.7% by reallocating our budget per channel as shown in the blue table. The response curve shows how the response changes with spend, marking both the initial and the proposed spend points.

Scenario 2: How much do I have to spend to hit a target ROAS/CPA of x?

To answer this, we make a small change to the scenario parameter of the Robyn allocator and add the target as follows:

AllocatorCollect <- robyn_allocator(
  InputCollect = InputCollect,
  OutputCollect = OutputCollect,
  select_model = select_model,
  # date_range = NULL, # Default last month as initial period
  scenario = "target_efficiency",
  target_value = 2, # Customize target ROAS or CPA value according to business needs
  export = create_files
)

Budget allocation for a target ROAS of 2 (refer to the blue charts)

We observe that in order to achieve a return on ad spend of 2, we need to decrease spending by 24.6%, reallocate the budget per channel as shown in the blue table, and accept a 19.1% reduction in revenue.

You can also obtain the marginal response of each additional dollar spent on a channel by using robyn_response() with the metric value set to the current spend plus one dollar, as sketched below.
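
A sketch of that calculation, assuming the argument names from Robyn's demo script (metric_name, metric_value) and a hypothetical current spend level; the returned field name may differ across Robyn versions:

spend_now <- 20000 # hypothetical current spend on Facebook for the simulated period

resp_at_spend <- robyn_response(
  InputCollect = InputCollect,
  OutputCollect = OutputCollect,
  select_model = select_model,
  metric_name = "facebook_S",
  metric_value = spend_now
)

resp_plus_one <- robyn_response(
  InputCollect = InputCollect,
  OutputCollect = OutputCollect,
  select_model = select_model,
  metric_name = "facebook_S",
  metric_value = spend_now + 1
)

# Marginal revenue of the next dollar on this channel
# (response_total follows Robyn's demo; older versions expose $response instead)
resp_plus_one$response_total - resp_at_spend$response_total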

All the code to reproduce the content of this article, along with the charts and visualization reports, can be found in my GitHub repository here.



Written by Rochan Nehete

Meet Rochan, a Business Analytics student eager to learn new things in data. I'm passionate about solving complex problems and expanding knowledge.