tl;dr: The Monthly Recurring Revenue (MRR) model is the most popular revenue model currently employed by SaaS and subscription businesses, which include everything from Apple Music to Loot Crate. In this model, a customer's value is defined over their subscribed lifetime, not their individual transactions, and this post formalizes a predictive framework for modeling that behavior using a living business plan.

I know most of you were expecting a Bitcoin-related update, but I do love tackling new stuff. I also know it has been a while since I last posted, but I've been busy graduating and such. Anyway, this post is about MRR, SaaS, R, LaTeX, and the power of free time. Click here to skip directly to the end result.


Introduction

At the start of this project, I wasn't even fully aware of what R and LaTeX were. By the end, I had created a single script that used both to run a Monte Carlo simulation, redraw graphs that change with each run of stochastically simulated data, and print it all out in pretty PDFs with adaptable formatting.

I learnt to simulate the Software as a Service (SaaS) business model and the Monthly Recurring Revenue (MRR) model, improved my odds of getting a job doing this kind of work, and brought more rigorous, data-centric techniques to marketing analysis than I had learnt at business school. I did not get the job I was working on this project for, decided that it was time for a change, and eventually moved to Toronto to test my luck there.

This project was the most successful failure that I’ve had so far.

I succeeded in building my living business plan and learning a new model, but unfortunately, I failed to get a job at the primary company I wanted to impress. This is how it happened, and how it helped inform my move to Toronto. 

Goal

  1. To learn and simulate the Monthly Recurring Revenue (MRR) model using the Monte Carlo method.

  2. To demonstrate that I could learn new technologies and business frameworks, and, hopefully, to impress Hootsuite.

I decided to do this by attempting to build a report that could simulate hypothetical business situations for a theoretical SaaS company, given predetermined parameters.

The result is a reproducible SaaS business plan, complete with cohort-based analysis using the MRR formula family. Because I set out to generate a reproducible document, I was able to generate three different versions of the paper, each with its own dataset, graphs, and values. Each deals with a different scenario: 'Best', 'Neutral', and 'Worst', whose behavior is described in the captions.
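
Each version was driven by its own parameter set. As an illustrative sketch only (the variable names here are mine, not the repo's), the three presets from the report captions further down could be captured as plain R lists:

# Illustrative presets; parameters mirror the captions on the three
# rendered reports below. Names are mine, not the repo's actual code.
scenarios <- list(
  Best    = list(Churn = 0.02, Growth = 0.07, Volatility = 0.02,
                 MRR = 30, Margin = 0.45, Customers = 60),
  Neutral = list(Churn = 0.06, Growth = 0.05, Volatility = 0.02,
                 MRR = 25, Margin = 0.45, Customers = 50),
  Worst   = list(Churn = 0.06, Growth = 0.03, Volatility = 0.07,
                 MRR = 20, Margin = 0.40, Customers = 40)
)

# e.g. draw one year of monthly churn rates for the 'Best' scenario:
# with(scenarios$Best, rnorm(12, mean = Churn, sd = Volatility))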

 

Methodology

To accomplish this, I wrote a model in R and LaTeX.

All the source code is on GitHub. I have reproduced the primary script for the project below; it also happens to be the first piece of R code I've ever written.

Yes, I know it's not the best code, but I had to keep it, if only for posterity's sake. It represents my transition from no experience in two languages (R and LaTeX) to knowing how to write something without it blowing up (after a lot of help from swirl and Quora).

It was proof (to me) that not everything has to be learnt in a classroom, and that the internet really is an amazing place if you run into it headfirst enough times.

# TaaS 
# MRR Calculator 

# Packages 
# --------------------- 

require(reshape)
require(ggplot2)

# Variables 
# --------------------- 

# Input
MRR <- 30 # Monthly subscription of $30 
AvgDuration <- mean(1:2) # Average Contract Duration between 1 and 2 months 
ARPA <- MRR * AvgDuration # Average Monthly Recurring Revenue per Account
Months <- 12 #Months of simulation 
Customers <- 60 #Initial influx of Customers (Month 1)
Margin <- 0.45 #Average sale margin
R <- 0.10 # Discount/Interest rate
Ad_Budget <- 200 # Amount spent acquiring customers monthly

# Customer Churn and Growth 

Churn <- rnorm(1:Months, mean = 0.10, sd = 0.05) # Expected churn rate mean and deviation (%)
Growth <- abs(rnorm(1:Months, mean = 0.10, sd = 0.05)) # Expected growth rate mean and deviation (%)

# Revenue Churn and Growth 

Revenue_Churn <- rnorm(1:Months, mean = 0.00, sd = 0.001) # Expected Revenue churn rate mean and deviation (%) 
Revenue_Growth <- abs(rnorm(1:Months, mean = 0.00, sd = 0.001)) # Expected Revenue growth rate mean and deviation (%)

# Generator
# --------------------- 

# Initial Vectors 

Churned_Customers <- 0
New_Customers <- 0
Net_Customers <- 0

Total_MRR <- MRR * Customers
Churned_MRR <- 0
Expanded_MRR <- 0
Net_MRR <- 0
Moving_MRR <- MRR

Time <- c(1:Months)
Customer_Lifetime <- 1/mean(Churn)

LTV <- (ARPA[1] * Margin * Customer_Lifetime)/((1+R/(Customer_Lifetime))^(Customer_Lifetime))
CAC <- Ad_Budget/(MRR[1] * Margin) 
Recover_CAC <- CAC/(mean(MRR) * Margin)


for (i in 1:Months) {
  # Customers
  Churned_Customers[i] <- Customers[i]*Churn[i]
  New_Customers[i] <- Customers[i]*Growth[i]
  Net_Customers[i] <- -Churned_Customers[i] + New_Customers[i]
  Customers[i+1] <- Customers[i] + Net_Customers[i]

  # MRR
  Total_MRR[i] = MRR * Customers[i]
  Churned_MRR <- Revenue_Churn * MRR
  Expanded_MRR <- Revenue_Growth * MRR
  Net_MRR <- -Churned_MRR + Expanded_MRR
  Moving_MRR[i + 1] <- MRR + Net_MRR[i]

  # ARPA
  ARPA[i+1] <- ((MRR*AvgDuration*Customers[i]) + (MRR*AvgDuration*Net_Customers[i]))/(Customers[i+1])

  #LTV
  LTV[i] <- (ARPA[i] * Margin * Customer_Lifetime)/((1+R/(Customer_Lifetime))^(Customer_Lifetime))

  #CAC
  CAC[1:12] <- Ad_Budget/mean(New_Customers)
  LTV_CAC <- LTV[1:12]/CAC[1:12]
  Recover_CAC[i] <- CAC[i]/(mean(MRR) * Margin)
}


#Cohort Tables
# --------------------- 

#Customers
Customer_Matrix <- matrix(0:0,nrow=Months,ncol=Months)
colnames(Customer_Matrix) <- c("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")
Customer_Matrix[1,1] <- Customers[1]

for (i in 1:(Months-1)) {
  Customer_Matrix[i + 1, i + 1] <- New_Customers[i+1]
  Customer_Matrix[,i+1] <- Customer_Matrix[,i] * (1 - Churn[i])
  Customer_Matrix[i + 1, i + 1] <- New_Customers[i+1]
}

# Add new first column called Cohort 


#MRR
MRR_Matrix <- matrix(0:0,nrow=Months,ncol=(Months))
colnames(MRR_Matrix) <- c("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")
rownames(MRR_Matrix) <- c(1:12)
MRR_Matrix[1,1] <- Customers[1] * MRR

for (i in 1:(Months-1)) {
  MRR_Matrix[,i + 1] <- Customer_Matrix[,i + 1] * MRR
}

print(MRR_Matrix)


# Storage
MRR_Data <- data.frame(1:12, Customers[1:12], Churned_Customers[1:12], New_Customers[1:12], Net_Customers[1:12], MRR[1:12], Total_MRR[1:12], ARPA[1:12], LTV[1:12], CAC[1:12], Recover_CAC[1:12], LTV_CAC[1:12],  
                       row.names = c("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"))
names(MRR_Data) <- c("Months","Customers", "Churned Customers", "New Customers", "Net Customers", "MRR", "Total MRR", "ARPA", "LTV", "CAC", "Recover CAC" , "LTV/CAC")

write.csv(MRR_Data, file="MRR_Data_Test.csv")

# References
# --------------------- 

# mean(MRR_Data$LTV)

# Plots 
# --------------------- 

library(ggplot2)

Customer_Frame <- data.frame(round(Customer_Matrix))
colnames(Customer_Frame) <- c(1:Months)
Customer_Frame["Cohort"] <- 1:12
Customer_Frame_M <- melt(Customer_Frame, id="Cohort")
colnames(Customer_Frame_M) <- c("Cohort","Month","Customers")

c1 <- ggplot(Customer_Frame_M, aes(x = Month, y = Customers, fill = Cohort))

c2 <- ggplot(Customer_Frame_M, aes(x = Month, y = Customers))

c1 + geom_bar(stat="identity") 

c2 + geom_bar(stat="identity") + facet_wrap(~ Cohort)

c2 + geom_freqpoly(aes(group = Cohort, colour = Cohort), stat="identity")
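
Looking back, the heart of the script is a single recursion over months. As a hindsight sketch (not the original code above; the function name is mine), the customer side condenses to:

# Hindsight sketch, not part of the original script: the same
# customer recursion from the loop above, condensed into a function.
simulate_customers <- function(c0, months, churn_mu, growth_mu, vol) {
  churn  <- rnorm(months, mean = churn_mu, sd = vol)
  growth <- abs(rnorm(months, mean = growth_mu, sd = vol))
  # C[i+1] = C[i] * (1 + g[i] - c[i]), accumulated across all months
  Reduce(function(C, i) C * (1 + growth[i] - churn[i]),
         seq_len(months), init = c0, accumulate = TRUE)
}

# Example: monthly customer counts under roughly the 'Best' parameters
# round(simulate_customers(60, 12, 0.02, 0.07, 0.02))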

This script took in variables and spat out data for each scenario:

I then wrapped it all up in a living PDF built with LaTeX (a typesetting system used for academic papers and my résumé):

\documentclass[11pt]{article}

\usepackage{microtype}
\usepackage{rotating,booktabs}

\title{Ties as a Service}

% ... Truncated for space

\subsubsection*{Customers}

The Customers ($C_{1}, C_{2}, \ldots, C_{n}$) are the basis for the MRR model, and the customer count is
usually the key measure of any SaaS business. It changes as a result of New Customers acquired over time
($NC_{1}, NC_{2}, \ldots, NC_{n}$) and Churned Customers over time ($CC_{1}, CC_{2}, \ldots, CC_{n}$).
It can therefore be related to the Customer Churn factor ($c_{i}$) and the Customer Growth factor ($g_{i}$) as follows:

\[
C_{i+1} = C_{i} + NC_{i} - CC_{i}
\]
where $NC_{i} = g_{i}C_{i}$ and $CC_{i} = c_{i}C_{i}$, so that
\[
C_{i+1} = C_{i}(1 + g_{i} - c_{i})
\]

% ... 
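
The glue between the R and the LaTeX isn't shown in the excerpt above. Assuming a Sweave/knitr-style .Rnw workflow (the standard way to embed R chunks in LaTeX; the actual mechanism lives in the repo), a chunk like the following would re-run the simulation and redraw its figure on every compile, which is what makes the document 'living':

% A sketch only: a Sweave-style chunk (chunk name and options are
% illustrative). Each compile re-draws the random data, so the
% figure and every derived number update with it.
<<customer_growth, fig=TRUE, echo=FALSE>>=
C  <- 60                          # initial customers, C_1
g  <- abs(rnorm(12, 0.10, 0.05))  # growth factors g_i
ch <- rnorm(12, 0.10, 0.05)       # churn factors c_i
for (i in 1:12) C[i + 1] <- C[i] * (1 + g[i] - ch[i])
plot(C, type = "b", xlab = "Month", ylab = "Customers")
@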

It rendered families of charts for the simulated cohorts:

All wrapped up in end-user-facing PDFs:

Results

I rendered three of these reports, one for each case: 'Best', 'Neutral', and 'Worst', each with different starting parameters for its simulation.

My final RStudio instance.

Versions are available in the GitHub repo and below. They include:

  • A formalization of the Monthly Recurring Revenue (MRR) function family.

  • An application of the Monte Carlo method using roughly 5,000 independent and identically distributed (i.i.d.) 'customers' (a minimal sketch of this idea follows the list).

  • Cohort-based analysis to test MRR function sensitivity, specifically to churn and growth rates.
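
To make the Monte Carlo point concrete: treating each customer as an i.i.d. draw means the expected lifetime follows directly from the monthly churn probability. A minimal sketch of that idea (my illustration, not code from the repo), using the Neutral-case parameters:

# Minimal Monte Carlo sketch (my illustration, not repo code):
# each i.i.d. 'customer' survives a geometric number of months.
set.seed(42)
n      <- 5000    # simulated i.i.d. customers
churn  <- 0.06    # monthly churn probability (Neutral case)
mrr    <- 25      # monthly subscription ($)
margin <- 0.45    # average sale margin

lifetime <- rgeom(n, prob = churn) + 1   # months paid before churning
ltv      <- mrr * margin * lifetime      # per-customer lifetime value

mean(ltv)              # Monte Carlo estimate of expected LTV
mrr * margin / churn   # closed-form check: ARPA x Margin x (1/churn)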


Best case: Low Churn (2%), High Growth (7%), Low Volatility (2%), $30 Subscription, 45% Margin, 60 Initial Customers

Neutral case: Moderate Churn (6%), Moderate Growth (5%), Low Volatility (2%), $25 Subscription, 45% Margin, 50 Initial Customers

Worst case: High Churn (6%), Low Growth (3%), High Volatility (7%), $20 Subscription, 40% Margin, 40 Initial Customers

Context

This project took 4 months, from no knowledge to the final product.

This includes learning the concepts, coding and debugging scripts, passing final exams, and partying before, during, and after graduation.

I graduated from the Sauder School of Business at UBC on May 28th, 2014, with a double specialization in Finance and Marketing and a minor in Economics, all while working on this project.

I've recorded how this project went for Future Me, in what I can only describe as a very detailed timeline. It is literally just a description of the process for my own notes, and all views and opinions herein are completely my own. Seriously, it's all just boring historical stuff down there I wrote for me, and I'd scroll past it if I were you.

  • February 2014: I met Mr. Underell from Hootsuite through Elaiza, a friend of mine who went to high school with me in Ghana (and also attended UBC). I wanted to learn more about the local tech industry and its best practices, so we ended up talking about a variety of topics. One that came up was the MRR model of SaaS firms, as described by David Kwok: a model I had no experience with at the time, but one that reminded me a lot of NPV analysis.

  • March 2014: In an effort to understand Mr. Kwok's work, I made a series of flashcards on Quizlet to get better acquainted with the terminology. The set isn't finished, but it gave me enough working knowledge to grasp the model and, more importantly, to see what I needed in order to simulate it. I decided at this point that I was going to build my ship in a bottle and use it to prove how well I understood the model, but I still wasn't sure how. I also failed a phone interview with Hootsuite for the first time, though for a position I was definitely not a good fit for.
  • April 2014: Mr. Underell introduced me to Mr. Zahid, who was also willing to discuss various industry practices and the MRR model. He directed me to yet more resources, such as Jerky as a Service, and was interested in seeing what I could do with it. I decided my model would be a variant of this idea; my friend Scott gave me the name 'Ties as a Service'. I also met Dustin Johnson, a classmate of mine at Sauder, who introduced me to R for the first time. Back then, it did not seem like a significant deviation from Matlab, but I see now that its open-source nature, huge package support, and interesting syntax give it a unique flexibility. He also acquainted me with LaTeX, which I had used previously, but never to this effect. I decided to code the project in R to kill two birds with one stone: learning a language and a concept simultaneously. I also had my last final exams, which had a deceptive finality to them; I doubt that'll be the last time I have to pass an exam, but it did feel nice to seem done.
  • May 2014: I finished learning the basics of R (swirl() was especially helpful) and began coding and formatting the official paper in LaTeX. I also became fascinated by the idea of 'Reproducible Research', a practice from the sciences that lets anyone reproduce a work using as few dependencies as possible. My next goal was to apply this idea to the project and go even further: to create a single 'living' report that I could generate on the fly as necessary and adapt to different conditions; a reproducible business report. I also graduated, and proceeded to celebrate thoroughly for a few blurry weeks.
  • June 2014: The project was finally reasonably complete after a series of drafts and numerous syntax errors, and I compiled the three final cases over two hours. Confident that I now grasped the idea, I applied for the Online Monetization Strategist position at Hootsuite, one that required three years of experience, assuming that since I could handle this model, I could learn whatever else they needed me to learn. In retrospect, I leaned too hard on a risky bet, but I've also learnt far more from risking things than from taking no risks at all. Besides, I was taught to treat risk and reward as the same thing.
  • July 2014: After some coaching from Mr. Underell, without whom none of this would have been possible, I made it through the phone interview this time. My friend Kevin gave me a ride to an in-person interview with Ms. Freman and Ms. Davidson. I didn't think I'd be so nervous. The project was briefly acknowledged, but it was difficult to explain its applications. As we moved on, I realized I still had a lot to learn about the tools actually used in the field (like running A/B testing campaigns to find which features paying customers prefer), and that I was still very 'green'. I learnt more during this interview process than during any other, but the real lesson was that even though I had graduated, I still had much, much more to learn. Proving you can learn is one thing; asking someone to trust you to learn everything you need to do a job is entirely another. I have yet to find a viable solution to the problem of getting experience without having experience, but at least I now know what kind of experience I need.
  • August 2014: I made it to level 2 of a 4-level interview process, but I wasn't asked to move on to level 3, and I understand; frankly, if I were them, I wouldn't have hired the guy in that interview either. I decided to update this blog post and record my results, in an effort to learn from the adventure. With my recently acquired knowledge, I hopefully won't be this naive again.
 

Thoughts

Despite my failure, I do not regret this project. On the contrary, it is one of the most fun and creative things I've ever done.

This post-graduation period has definitely been one of the most amusing times of my life so far. College is an interesting time, and as it draws to a close, this little project is as much a part of my adventures as anything else I've done this summer. It wouldn't have been possible without the inspiration of many people, and the examples and work of many more.

Yes, it was unfortunate that my little plan involving Hootsuite didn't pan out, but I'm starting to think life just isn't meant to be linear. I've learnt so much more making mistakes than I would have just being handed the job I thought I wanted, and to be perfectly honest, I'm still not sure I would've been a good fit for it.

I was so sure my work was proof enough that I was worth hiring that I wasn't practical about what I was actually being hired for and why.

I assumed that building this ship in a bottle proved I would be good at everything else, because I learn quickly. Such an assumption simply isn't viable when real money and real professionals are involved.

This project did prove I am capable of learning new ideas and technologies very quickly when I need to, and capable of producing complex work using a combination of assumptions and examples refined over time. However, it also proved that a solution without a problem is not a solution at all, but just an exercise in design and decoration. In my desire to program something elegant, the businessman in me slipped, and I lost sight of its purpose.

A solution without a problem isn’t really a solution at all.

I realize now that no matter what I build, I must never lose sight of what makes a work truly useful: applicability.

 

Conclusions

So, why did I do all this?  Two reasons: 

  1. To see if I could.

  2. See #1.

I like learning new stuff, and I like challenging myself.

I may have failed at my secondary goal, but now I'm less afraid of failure, and I learnt far more by trying than I would have by not trying at all.

I had this poster on my wall. Thanks, Amazon.

Now that I've graduated, I may not have time to keep doing this sort of work. Finding a position that allows you to work with math and be creative is easier said than done, especially when you're just starting out. People have a fondness for the traditional, and if my experience at Hootsuite (or lack thereof) has taught me anything, it's that I'm still learning every day.

In any case, this project is now in my portfolio, and I proved to myself that I can build these sorts of models in a myriad of environments for a variety of purposes. It has even opened the door to a little freelancing, an option I hadn't considered initially. Unfortunately, I can't publish what I'm working on for clients, but hopefully I'll get the chance to work on another big personal project in the future. Perhaps the next one will have more immediate usefulness.

I've got the urge to give Machine Learning a closer look. Maybe cryptocurrencies again; there's some interesting stuff developing in the Ethereum world. Maybe something else entirely. Only Future Me knows.

All I know is I can't wait to get started on the next big thing.

For the next little bit, though, I'm going to enjoy this last 'college' summer, party with the people I probably won't see again for a while, and see what else I can do. Oh, and get a job and start doing grown-up things.

Eventually.

brb.