The European Super League in football is all but dead.
Late last night all six founding English teams decided to abandon the ‘league’ before it got off the ground, and this morning one of the villains of the tale, Fenway Sports Group, issued this apology to everybody involved at the club.
You can look at it in many different ways, but I think the apology holds an important lesson for all:
This is what you risk when you try to bring a solution to market without any real understanding of the customer demand.
While it is clear that the 12 clubs originally involved in the plans have a massive worldwide fan base, the die-hard supporters – the core of every CLUB – have shown nothing but disgust for the plans. And the ‘plastic fans’ are not enough to make the plan viable in any way.
Let’s look at this as a classic example of a large-scale experiment gone horribly wrong.
When you’re trying to solve a problem for someone, it helps a lot if you can empathize – even feel – the problem yourself.
Because it’s when you have a real sense of the problem that you release the creative juices that allow you not only to look at the problem from different angles but also to come up with ideas for trying out different solutions in easy, creative and quick ways.
On the other hand, when you don’t feel the problem, it can be hard to not over-strategize and overcomplicate how you go about trying to solve it.
It just doesn’t feel natural to you, and when you’re stuck creatively, your only fallback option is the complex process you bank on to see you through to the other side.
When you do feel the problem, what you need to do next comes more naturally. You have an easier time setting the necessary wheels in motion, getting people on board to help you and in general just getting s*** done.
So make sure you can feel the problem before anything else. It will make the road ahead so much easier.
One of the things we’re constantly looking for when we’re talking to potential co-founders is the ability to fall in love with the problem we’re looking to solve. Either straight off the bat – much preferred but rare – or as something to grow into easily.
But is love of the problem always that great? Or does it need to be balanced out in some way?
The questions are valid insofar as one of the key contributing factors to startup failure remains building something nobody wants. And doing precisely that is what you’re very much at risk of when you have fallen in love with the problem.
Because you want to solve it so badly that you jump at your first idea, give it your all, get it released and then…nothing.
When you have fallen in love with the problem, the hardest part is to remain true to a good and thorough discovery process.
You need to assume that even if you think you have already figured everything out, you know essentially nothing. And the path to that knowledge runs through lots of hypotheses, experiments and iterations, while working what you learn along the way into your offering.
While it is easy to say, it is super hard to do in real life. I know; I struggle daily. But nonetheless I still try to stay fully aware that the best way to ultimately help solve the problem you have fallen in love with is to do it the right way.
And not fall off a cliff due to pure love and passion.
When trying to understand a problem, its potential solutions and what you should build in the end, it is so easy to lose the big picture of what you’re doing and how that translates, experiment by experiment, into moving your product closer to something desirable.
But it doesn’t have to be that way.
One of the things I have found extremely useful is to build out an Experiment Roadmap: a sequence of experiments I think I am going to run in a particular order to get to the insights I need before I feel confident in what our MVP should include.
The roadmap is important to have in order not to lose track. But it is not necessarily the actual roadmap. Because as we go about experimenting – staying open to digesting our learnings and moving on from them in the best possible manner – our roadmap changes.
So in fact we end up with the theoretical roadmap and the real one.
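The split between the theoretical and the real roadmap can be sketched in code. This is a hypothetical illustration, not the author's actual tool; the experiment names and fields are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    hypothesis: str          # what we expect to learn or validate
    status: str = "planned"  # "planned", "validated" or "invalidated"

@dataclass
class ExperimentRoadmap:
    planned: list                                # the theoretical roadmap, outlined up front
    actual: list = field(default_factory=list)   # what we really ran, in order

    def run(self, experiment, outcome):
        """Record an experiment as actually run, with its outcome."""
        experiment.status = outcome
        self.actual.append(experiment)

    def divergence(self):
        """Planned experiments never run: the gap between 'thinking' and 'doing'."""
        ran = {e.name for e in self.actual}
        return [e.name for e in self.planned if e.name not in ran]

# Illustrative data only
roadmap = ExperimentRoadmap(planned=[
    Experiment("landing-page", "Homeowners will sign up for updates"),
    Experiment("concierge-test", "Homeowners will pay for a managed project"),
])
roadmap.run(Experiment("interview-round", "Fear of quagmires delays projects"),
            "validated")
print(roadmap.divergence())  # → ['landing-page', 'concierge-test']
```

Keeping both lists around is what makes the later comparison of ‘thinking’ versus ‘doing’ possible.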
Why not just have the real one then and forget about trying to outline it in the first place?
Because outlining your thought process and your path towards anticipated learning and validation is an excellent catalyst for that thought process itself. It ensures that I think about how NOT to fall into the abyss of just building what I feel we should be building, without any prior experimentation.
In other words: Laying it out up front ensures that we follow the path of generating insights and validated learnings before we build. And the actual roadmap of experiments is how the journey to the MVP actually forms.
By doing it this way we also get a chance to compare notes and learn from our approach as we go along. What was the difference between ‘thinking’ and ‘doing’, and why do we think that was the case?
Those answers may be able to serve us very well and make us sharper, better and more efficient going forward. At least that is what I am betting on.
It is not uncommon to see new products launch with a lot of features. Too many features, perhaps.
The rationale is fully understandable; there is an urge to get a ‘finished’ product out, and you quickly form your own opinions about what that means in terms of feature set.
The underlying rationale behind it is more important though:
When you launch with a lot of features, essentially what you’re hoping is that there will be something in there that will get customers to love and adopt your product and not just turn away – or not notice it at all.
I think it’s fair to say it’s ‘fear of failure’. Pure and simple.
And I also think it is fair to recognize it as such. Because when you’re developing new products and trying to do something that perhaps no-one else has done before you, your biggest fear is that nobody is going to like it.
Or worse: not even care.
In fact I will argue that it’s the key reason why so many still fail to test their ideas and assumptions before they go and build their first product: the basic fear of getting the idea rejected by the market before it all even begins.
But then again, I don’t think there is a way around it. I think the only way forward is to ‘bet on less’ – even if it’s super tough – and do whatever you can to nail the things you do rather than risk being ‘all over the place’.
Thankfully, we do have tools such as the ability to identify our assumptions, form our hypotheses and run smaller-scale, appropriate experiments against them to get more insight and data on whether we’re onto something.
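One minimal way to sketch that ‘bet on less’ discipline: list your assumptions, score each by impact-if-wrong and current evidence, and test the riskiest unknowns first with a small experiment instead of building everything up front. The assumptions, scores and scoring formula below are purely illustrative, not taken from the text.

```python
assumptions = [
    # (assumption, impact if wrong 1-5, current evidence 1-5) - made-up examples
    ("Customers will pay a monthly fee",        5, 1),
    ("Users want an in-app chat feature",       2, 2),
    ("Professionals will respond within a day", 4, 1),
]

def priority(item):
    """High impact combined with low evidence means: test this first."""
    _, impact, evidence = item
    return impact * (6 - evidence)

for assumption, impact, evidence in sorted(assumptions, key=priority, reverse=True):
    print(f"Run a small-scale experiment on: {assumption!r}")
```

The exact formula matters less than the habit: make the assumptions explicit, then let risk, not excitement, pick the next experiment.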
Today we’re soft-launching a simple pilot of our latest project, which we call FIXDIT.
FIXDIT is for homeowners and home maintenance professionals who want to get rid of all the horror stories about home maintenance and redecoration projects. It is a challenging area, and lots of homeowners put off doing projects for fear of getting into a quagmire. I know, because I am one of them myself. Which is why I thought it was an important problem to try to find a better solution for.
With FIXDIT we’re looking to bring the love back between homeowners and maintenance professionals. We’re trying out a completely new spin on the market and the dynamics in it, and time and – most importantly – your reception of the concept will tell whether this is the start of something more or just a stupid idea.
One of the easiest things is to get carried away by your great idea. For many aspiring entrepreneurs it just happens straight out of the gate. But even for those who have learned and accepted that getting to product/market-fit is an experimental process, it can be tricky to stay the course and be true to your process.
Staying nimble when you need to is a virtue. With an emphasis on ‘when you need to’. Because of course there comes a time – hopefully – where it makes a ton of sense to just do whatever it takes to hit it out of the park. Chances are though that that won’t be the first thing you need to do. And that doing it anyway may send you seriously off course – sometimes without the ability to recover.
A good way of staying the course could be to have a simple process written down. David J. Bland has an excellent one in a video here, where he connects Pirate Metrics for growth with experimentation and how to allocate time and budget. That is exactly what you need to stay focused on the right things at the right points in time – and to stay the course and stay nimble when you need to.
One of the things I spend a significant amount of time on is devising, designing and running experiments on various different ideas for new concepts. It is both fun and challenging.
The challenging part is mostly about not reverting to the same 2-3 types of experiments and using them again and again. Not just because doing so risks developing bias, but also because there are actually a lot of different ways you can design and run experiments, depending on what kind of hypothesis you’re trying to (dis)prove.
For that reason I have built yet another Excel model: a simple database of all the different experiments we know and can run, with titles, applicable stages, ‘how to’ recipes and our know-how and experience in running them with valid results. Using the filter option on it quickly allows me to narrow down the list of useful experiment types for any given idea, broaden our horizon – and generate better results.
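The same filtering idea can be sketched outside Excel. Below is a hypothetical miniature of such a database; the experiment types, stages and field names are illustrative assumptions, not the contents of the author's actual spreadsheet.

```python
# Each entry mirrors a spreadsheet row: title, applicable stages,
# a 'how to' recipe and our experience level in running it. Illustrative data.
experiments = [
    {"title": "Landing page smoke test", "stages": {"problem", "solution"},
     "how_to": "Describe the offer, measure sign-ups", "experience": "high"},
    {"title": "Concierge MVP", "stages": {"solution"},
     "how_to": "Deliver the service manually to a few customers", "experience": "medium"},
    {"title": "Customer interviews", "stages": {"problem"},
     "how_to": "Open-ended conversations about the pain", "experience": "high"},
]

def applicable(stage, min_experience="low"):
    """Narrow the list down to experiment types useful at a given stage,
    optionally requiring a minimum level of experience in running them."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return [e["title"] for e in experiments
            if stage in e["stages"] and rank[e["experience"]] >= rank[min_experience]]

print(applicable("problem"))  # → ['Landing page smoke test', 'Customer interviews']
```

The point is the same as the spreadsheet filter: a couple of structured fields per experiment type are enough to stop you defaulting to the same 2-3 experiments every time.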