Many years have passed since the idea of “Minimum Viable Product” first emerged, and over time it has become a very well-defined framework from a product management point of view.
There’s a consensus on what MVP means, built upon the industry’s decades of experience. The result is a wealth of content and resources that explain how to define the specifics of a new product that has yet to be launched.
What I believe hasn’t been discussed enough, though, is the translation of the MVP framework into its technical implementation: What mental models should CTOs use after the product specifics have been laid down? Which tech choices should you make if you’re Chief Technology Officer at your first startup?
MVP: what and why?
Let’s start by recapping the minimum viable product framework and what it has to do with launching startups. This is important so that we share a solid common ground.
The underlying premise is that investors (typically) don’t invest if they don’t see some kind of metric that proves there’s a market for your product.
At the same time, though, you can’t predict in advance what kind of shape this product-market fit will have – in other words, before finding the set of features that will make you raise VC money, you’ll have to change and iterate your product a lot. And while you’re busy doing it, your money is running out and your competitors are trying to find the very same thing, so you have to be really fast.
This ultimately means that you have to keep things simple: you can’t change a lot and do it fast if the product you have to steer is a Titanic. This “keeping things simple” is the MVP: a very small set of features that are considered complete enough to prove to investors that there’s a fit with a market, while at the same time keeping its complexity as low as possible because you can’t know in advance when this fit will happen.
Reframing the problem
Designing an MVP is a complex balancing act that can be reframed as a constrained minimization problem: you have to minimize costs, with the constraint of still being able to verify whether your business idea works or not.
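In symbols, the framing looks roughly like this (the names cost, spec, and validates are labels I’m introducing for illustration, not terms from any formal model):

```latex
\min_{I \,\in\, \text{implementations}} \; \mathrm{cost}(I)
\quad \text{subject to} \quad \mathrm{validates}(I, \text{spec})
```

That is: among all implementations that are still capable of proving (or disproving) product-market fit against the spec, pick the cheapest one.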
We can split the problem into two parts, from which descend two different sets of responsibilities:
The constraint – i.e. the product specs, the definition of the features that you’ll have to defend during an investor meeting – is the responsibility of the product team (most likely: the founder in charge of the product)
The minimization, on the other hand, is under the control of the tech team (usually the CTO), which gets the specs and has to find a way to make them happen
Let’s make the reasonable assumption that defining the specifics is not a problem: in this context, the “constraint” part of the problem is a given. People far more experienced and smarter than I am have already written plenty of content on the matter.
What about the “minimization” part, though? We need to find a set of principles that we can always hold true – a mental model. Later on, we’ll see the drawbacks associated with this approach.
MVP tech implementation: a mental model
Given my (limited) experience, and from what I’ve seen other successful fellow founders do, I propose three principles for first-time CTOs:
1. Minimize the number of code repos
Multiple code repositories – with different languages, different specialized developers, and different dependencies – imply significant coordination costs, management boilerplate, and sunk costs that are difficult to shed when pivoting the product and thus the team composition.
Usually this happens when you are building a product for different platforms (eg. Android, iPhone, web, desktop, and so on).
From what I’ve seen, though, the growth and retention advantages you can get by expanding your user reach and by improving product quality with a native UX can’t offset the significant jump in costs that you’ll incur.
Along this line, you should also try to avoid having a backend at all, if possible: most of the time, you just need a database and a few functions that react to certain user behaviors (eg. triggering a push notification when a message is sent). I’d avoid setting up a fully-fledged server if you just need something to that effect: the amount of work needed to secure your connections, implement a deploy policy, work out the permission details, write down the API signatures, the architecture, and the infrastructure is just not worth it.
Services like Firebase Cloud Functions and database-as-a-service offerings are perfect for this kind of need.
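To make the pattern concrete, here is a minimal, framework-free sketch of “a database plus a few functions that react to user behavior”. The names (`on_message_sent`, `send_push`, `write_message`) are illustrative stand-ins I’m inventing, not Firebase’s actual API; in Firebase terms the handler would be a Cloud Function triggered by a database write:

```python
# Minimal sketch of the "database + reactive functions" pattern,
# independent of any specific provider.

notifications = []  # stand-in for a push-notification service
messages = []       # stand-in for the hosted database

def send_push(recipient, text):
    """Illustrative stand-in for a push-notification call."""
    notifications.append((recipient, text))

def on_message_sent(message):
    """Runs whenever a message is written -- the 'cloud function'."""
    send_push(message["to"], f"New message from {message['from']}")

def write_message(message):
    """Database write that fires the trigger, as a managed DB would."""
    messages.append(message)
    on_message_sent(message)

write_message({"from": "ada", "to": "bob", "text": "hi"})
```

The point of the pattern is that everything above the two stand-in lists is business logic: the managed service owns storage, delivery, and triggering, and you only write the reaction.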
2. Minimize code that is not visible
I know that this doesn’t sound like a solid engineering principle, and I’m not saying it is, but it’s a quite effective rule of thumb: when you minimize the amount of code that doesn’t directly and visibly affect your users, you’re effectively prioritizing fast deploys over your ability to scale.
Speed and scale are usually two competing priorities when designing a technical infrastructure. The more you prioritize scale, the more “invisible” code you write – and this ultimately means being dragged down by something you don’t critically need to survive.
You shouldn’t really care about what happens when you’ll have 100K concurrently active users. In the past, this was especially true, because hardware upgrades were extremely delicate operations. Today, luckily, we have services like AWS and Google Cloud that let us go from the complexity of a hackathon project to a Netflix-sized infrastructure with a single click. However, aside from hardware specifications, scale-related things – like thorough logging systems, full-blown orchestrators, or even continuous integration pipelines – should be significantly de-prioritized.
Even documentation should be somewhat discarded, unless it pertains to the company’s core technology. Documentation, in the end, is a coordination tool, and there should be close to no people to coordinate at this stage, as well as no piece of tech to understand that can’t potentially change in the span of a few weeks.
Finally, another aspect of prioritizing speed over scale is trying to remove as many layers as possible between those who define the specifics and those who implement them. No designer, no external consultants, no agency, no complex product management processes should stand between the founders and the coders. Yes, you’ll end up having less structure and formality (which become vital only when dealing with multiple teams), but you’ll also definitely gain in speed.
Corollary: outsource non-critical complexity
When building an MVP, it’s totally ok to use third-party tools, even if they’re branded with another company logo.
For instance, I wouldn’t spend a single second of my time creating custom webpages for questionnaires: there’s always the good old Typeform. I wouldn’t worry about building my own authentication mechanism at launch: it’s fine to go with off-the-shelf tools like Firebase Authentication, even if it means relying on Google to manage all your users’ access.
It’s true: this means relying on external companies to perform core product activities, and you could end up paying a hefty price in the future – since the cost of third-party tools usually increases linearly with the growth of your user base.
However, at this stage, building that complexity in-house brings very few advantages, because fast development is far more important than the future ability to scale.
3. You (almost) only need numbers
An exception to the previous two points concerns analytics. What is core and shouldn’t be discarded at all is an analytics system you can trust.
How to spot and prove product-market fit is a topic for another essay, but for now let’s say that you need engagement metrics to do it – such as DAU, MAU, stickiness, L28, cohorts, attribution, retention, etc. I’ve seen plenty of founders not caring enough about them, and not designing the tech stack with such KPIs in mind from the very beginning.
Technically speaking, this means including analytics SDKs like Google Analytics, Firebase, Facebook Analytics, and whatnot from day zero. This is the only point where in-house development can sometimes be a better choice than an off-the-shelf tool: I’ve seen some VCs asking for quite complex analyses of product usage and engagement that you can’t possibly perform without access to raw data. However, just so you know, Firebase and BigQuery can pull off the job quite nicely from what I’ve seen.
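To make the “raw data” point concrete, here is a sketch of computing DAU, MAU, and stickiness from raw event rows – the kind of analysis a packaged dashboard may not expose. The event schema is one I’m assuming for illustration; in practice the rows would come from something like the Firebase-to-BigQuery export rather than a hardcoded list:

```python
from datetime import date

# Illustrative raw event log: (user_id, event_date) pairs.
events = [
    ("u1", date(2024, 1, 1)), ("u2", date(2024, 1, 1)),
    ("u1", date(2024, 1, 2)),
    ("u1", date(2024, 1, 15)), ("u3", date(2024, 1, 15)),
]

def dau(events, day):
    """Distinct users active on a given day."""
    return len({u for u, d in events if d == day})

def mau(events, year, month):
    """Distinct users active in a given month."""
    return len({u for u, d in events if (d.year, d.month) == (year, month)})

def stickiness(events, day):
    """DAU/MAU ratio for the month containing `day`."""
    return dau(events, day) / mau(events, day.year, day.month)

print(dau(events, date(2024, 1, 1)))         # 2
print(mau(events, 2024, 1))                  # 3
print(stickiness(events, date(2024, 1, 1)))  # 2/3 ≈ 0.67
```

The same computations are a few lines of SQL over raw exported events; the point is that owning the raw data keeps these questions answerable when a VC asks something your dashboard wasn’t built for.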
A good generalization: avoid native development whenever you can
This is mainly true for mobile development, but it can be extended to desktop development as well.
Native mobile development – eg. coding the Android client in Java/Kotlin or the iPhone one in Objective-C/Swift – can be a very painful choice when working on an MVP.
It makes the specialization costs balloon, with dedicated procedures for each client:
you’d have to coordinate different developers basically doing the same thing
you’d have to adapt the very same feature to different environments, both from a technical and UI standpoint
you’d have to track and then consolidate potentially different metrics sources and KPIs
you’d have to run very different performance marketing campaigns
you’d have to work on different marketing assets for each platform, potentially to the point of customizing screenshots because you designed relevant UI differences
It just doesn’t make any sense, so I’d strongly advise against such a choice.
Frameworks like Flutter and React Native are well suited to solving this problem: they’re cross-platform, offer a near-native UX from a single codebase, and have Google and Facebook behind them (respectively) – meaning that all the most important metrics you might need are collected with basically just a flag that has to be turned on.
Both of them even offer a way to bring a very significant portion of the codebase to desktop environments and progressive web apps, with things like Electron and Project Hummingbird, in case you might need it.
The risk of technical debt
This is all good and well, but what happens after the MVP stage?
The risk of tech debt is very high: as time goes on, the benefits of a lean and tight architecture shrink, while the need for scale, structure, good UX, and specialization grows. You’ll get to a point where the two forces invert, and you’ll need to switch.
As with every debt, the sooner you pay it back, the better. That’s why a very important factor that founders (and CTOs in particular) have to take into account when fundraising and roadmapping is the time and money needed to pay back the tech debt.
This means potentially re-writing the client(s), introducing a solid backend infrastructure, establishing good CI procedures and documentation, setting up a design system, and so on. Don’t fool yourselves and your investors by pretending that this won’t happen – it’s money well spent.
If the MVP goal is to prove that “this might work”, between pre-seed and series A you need instead to prove that you can scale.
And you need the right structure to do it.
Stay in the loop
I have a non-lame, zero-bullshit newsletter on startups, tech-enabled scalability, and data-driven impact. Sign up to stay up to date and to keep this conversation open – it’ll be worth your time and attention.
If you know someone that can benefit from what I've written, send them the link!
Sharing is caring 💌
If you have any feedback, be it positive or negative, I’d greatly appreciate it if you let me know! I heavily rely on feedback in order to get better, so thanks in advance.