Alexandra Lung & Jessica Gantier
Duration: 39 min
Views: 330
3 likes
Published: December 1, 2020

Transcript

[00:00:15] Okay, hello everyone. I'm Alexandra Lung.
[00:00:18] And hi everyone. My name is Jessica Gantier.
[00:00:22] And today we're going to talk about how to lead fast innovation, and especially about ten shades of MVP. 114 million: that's the number of search results on Google when you look up the MVP topic. Maybe it's not very surprising, because MVP, or Minimum Viable Product, has become a real buzzword, especially in the product world in the last few years. But it's also a buzzword that is used in very different ways and with very different meanings, depending on who uses it. The term was popularized by Eric Ries, who defined it as a new product, or a new version of a product, which allows a team to collect the maximum amount of validated learning with the least effort. A lot of times when I talk to different companies or different PMs, there is an aspect that gets left aside, which is the learning part, and especially the validated learning part. And that's why today we're going to talk about different shades of MVP, and especially about lean experiments, because these are the best way of getting validated learnings.
[00:01:50] So today we'll be sharing how we use lean experiments to innovate fast by testing our ideas early and often. We'll share concrete examples from real-life projects we worked on, and from other innovative teams. Our goal today is to share that mindset with you, and hopefully you'll get as excited as we are about this topic, because it can really help you bootstrap your testing process and minimize your risk of building the wrong thing.
[00:02:25] So first we have a little scene for you, and hopefully it will resonate with situations you've experienced.
[00:02:40] You want to start it, Jess?
[00:02:41] Sure. So, you know, Alex, I think we really should be building a chatbot feature. That's what leadership is asking us for. We should really be putting this on our roadmap right away. I think it's the great next step for our business.
[00:02:59] That's a pretty good idea, but actually, you know, we don't really have time. We discussed it in the leadership team, and we need to have this out really, really fast. And at the same time, we already know what we need to build. We know our users, we know this industry. I don't think we need to test this. Let's just go ahead with it. We don't have the time.
[00:03:24] So very often we hear that we don't have the time to test, or that we don't have the budget for it.
[00:03:33] And many times we face situations where we can't really go and talk to the end user as much as we'd like to. Yes, testing is an investment; doing research is an investment, it's true. But testing is the only way to minimize the risk of building the wrong thing, and in the long run it will save you both time and money.
[00:04:01] So, why lean experiments? Again, it's the way to minimize risk in your product development. Usually you will want to innovate as fast as possible; you want to push ideas out the door before your competitors do. But no one wants to rush it and ship something that will never be used. The only way to be innovative, disrupt a market and change people's lives is to build things that people will actually use, need and want to buy. Another thing to stress is that it's very important to test early, because a lot of studies have demonstrated that the more time you invest in an idea, the less willing you will be to put it aside or give up on it, even if you see clear signs that it is not performing well. All right, so lean experiments are a great tool to do two things: test an idea or explore a topic. By testing an idea, I mean testing your assumptions, your hypotheses, and those can be design assumptions but also business assumptions. A concrete example would be: we believe that by building a new onboarding flow, we will boost the conversion rate and get more product adoption. Or, for instance: by building a new referral program, we will boost user acquisition. As for exploring a topic, that's more the explorative phase where you're trying to understand your problem space. For instance: what do our users need? What are their pain points today? But also: how do they perceive our brand? How do they see us compared to our competitors? Or how do they react in real-life situations when looking at our product?
[00:06:16] And the key idea behind lean experiments and what makes this tool so powerful is that you want to create a way to learn in the most efficient manner, which means minimizing the costs, and increasing the speed of testing as much as possible, but also having clear, tangible, measurable results, as you would have in a scientific experiment.
[00:06:46] So, now the question is: where do we start? As we said, we always have tons of ideas, tons of assumptions, and we cannot test everything; that would not be efficient, we simply can't do it. So we start by identifying the riskiest assumptions. The riskiest assumptions are the ones that would kill our project if we are wrong about them, or the ones that seem like an absolutely awesome idea but would be extremely time-consuming, complex and expensive to build, even if we stick to a POC or MVP version. So what we do first is a prioritization exercise, and we really focus on our top three riskiest assumptions, the ones that might kill our project or business if we're wrong about them.
[00:07:45] Okay, so now let's get to how we do this. There are five steps to running lean experiments. The very first one might seem obvious, but teams don't always do it: whenever you have an idea for a solution, a new business solution, a product feature and so on, acknowledge that at this very first stage it is just a hypothesis. We think that this new feature will drive more engagement; we think that this new referral program will drive more business value, and so on. These are just hypotheses. So it's really important at this first stage that, after you've acknowledged it, you write down your hypothesis with your team and make sure you are aligned on exactly what this hypothesis means for you. The way you usually frame it is really as a hypothesis: you start with "We believe that". And here I'll take the example of a chatbot solution: we believe that by showing a chatbot on every page, clients will find the information they look for faster.
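To make this first step concrete, here is a minimal sketch of how a team might write a hypothesis down in a structured way. The field names and the riskiness scale are our own illustration, not a format prescribed in the talk:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable assumption, written down and shared with the team."""
    statement: str  # always framed as "We believe that ..."
    riskiness: int  # 1 (low) to 5 (would kill the project if wrong)

chatbot_hypothesis = Hypothesis(
    statement=("We believe that by showing a chatbot on every page, "
               "clients will find the information they look for faster."),
    riskiness=4,
)
```

Writing the statement as a single shared sentence is what forces the alignment the speakers describe: everyone on the team agrees on exactly what is being claimed before anything gets built.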
[00:08:59] So, now that we have our hypothesis written down, the next step is to think about what experiment we want to run. When defining the experiment, we always keep in mind: which experiment can bring me the most learnings around this hypothesis? But also: can I actually do this, and how much will it cost me? Is it something that still needs two months of development, or is it something where I can draw on a piece of paper and test it right away?
[00:09:39] Going back to our chatbot hypothesis, an idea for a test could be: we are going to create a low-fidelity prototype, and we're going to ask five users what they would do to contact our company. So in this case the test is the low-fidelity prototype, and we make sure we also define the scenario here, while we're looking at our hypothesis. You might say: okay, you have your hypothesis, you have your test, you're ready to go and run it. But not so fast. There is an important extra step you need to take, which is to clearly define what will validate or invalidate your hypothesis. Why do you need to do this at this stage? Simply because you want to avoid biases. If you don't define this now and you just go and run your experiment, then as soon as some of the participants' answers start validating your hypothesis, you will be very keen to say "well, that's okay", and in some cases you will start getting more attached to your hypothesis as well. So it's really important to define this validation criterion, and to define it in a measurable, objective way.
[00:10:59] Going back to our example, the validation criterion could be: four out of five users click on chat. This is something you won't be arguing about afterwards; we simply count how many users actually click on it. But in some cases it's important to mix this metric, which is a number, very measurable, with something we call observable, more of an observable behavior. In this case, what could be interesting is to look at whether users notice the chat at first sight, and combine that with whether they click on it. You need to take into account the fact that in a user test, the users are there to run the test with you, and they will put more effort into the task you're asking them to do. In a real-life scenario, if they don't find the chat right away, they might not want to scroll back and forth on your site trying to find that button. So keep this in mind: the measurable criterion is always important, but in some cases you want an observable criterion as well. Now that we have all this, let me give you a glimpse of how we usually get there. It's a collaborative exercise that we do with the team: the product manager, the product designer, sometimes some of the engineers or external stakeholders. We usually start from a hypothesis that is already defined by all of us, and the different team members brainstorm on the tests and think about the different validation criteria and the costs. Having all these different ideas on pieces of paper helps us, at the end, compare the costs and the maximum learnings, and of course collective ideas and knowledge can bring really great results. Once we have defined all this, we actually run the experiment. And once we've run it, we get to our last step, which is analyzing the results and taking the decision. Normally, if your validation criterion was really measurable, analyzing the results will be a pretty straightforward and quick task, and so will taking the decision. Something important in the way you see lean experiments and take the decision is to be very aware that experiments don't fail; it's just hypotheses that get proven wrong. A lot of times I've worked with teams that are afraid that if a hypothesis gets invalidated, it's like they have failed: they thought of something that wasn't true, and they're so afraid of that that they will sometimes try to look at the data in a way that makes the hypothesis seem true. But the fact that a hypothesis is invalidated will actually stop you from putting time or money, or usually both, into something that will not bring any value to your business, or that in some cases could even kill your business. So it's actually good to be able to invalidate the ideas that are not the right ones. Just keep in mind that the only way you can fail as a business, or at launching a new product, is by not learning. So make sure you focus on learning, which is the most important part.
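As an illustration of why the criterion should be measurable and fixed in advance, here is a small sketch that turns "four out of five users click on chat" into a check no one can argue about afterwards. The session data is made up for the example, and tracking the observable "noticed at first sight" alongside the hard metric follows the mix the speakers describe:

```python
# Hypothetical results from the five user-test sessions: did each
# participant click on the chat, and did they notice it at first sight?
sessions = [
    {"clicked_chat": True,  "noticed_first_sight": True},
    {"clicked_chat": True,  "noticed_first_sight": False},
    {"clicked_chat": False, "noticed_first_sight": False},
    {"clicked_chat": True,  "noticed_first_sight": True},
    {"clicked_chat": True,  "noticed_first_sight": True},
]

clicks = sum(s["clicked_chat"] for s in sessions)
noticed = sum(s["noticed_first_sight"] for s in sessions)

# Validation criterion defined BEFORE running the test: 4 out of 5 click.
validated = clicks >= 4
print(f"{clicks}/5 clicked, {noticed}/5 noticed at first sight -> "
      f"{'hypothesis validated' if validated else 'hypothesis invalidated'}")
```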
[00:14:45] All right, so now let's dig into the toolbox. This relates to step three, where you define the experiment and want to find the right way to test your idea. We'll share some very concrete examples of the different types of experiments you could be using, and explain what each is good for.
[00:15:17] So the first type is low-fi prototyping, and this is actually what we used in the example earlier to test the chatbot idea: have a prototype. Prototypes are great: they are cheap, they are fast, you can open Figma, InVision or whatever and have something crafted very quickly, especially if you keep it low-fi. Sometimes that's exactly the right way to test your idea. But sometimes you want to test something a bit complex that's really hard to mock up with a prototype, or what is crucial for you is to test your idea in real conditions, not in the setup of bringing a user to the table for a one-on-one user testing session. That's why we want to share a lot more ways you could be testing your ideas.
[00:16:18] And talking about testing in real conditions, here is A/B testing. A/B testing is pretty popular lately; we hear a lot of examples. It's a really interesting experiment to run if you want to test more at scale, and also test in real life. Your users will really be in your app or on your website, acting as they normally do, and this lets you compare your different versions in a very realistic scenario. What's also interesting about A/B testing is the flexibility. A lot of times we think about action buttons when we think about A/B testing, but we can test a lot of things. We can test really small things that can have a huge impact for a business, sometimes just a letter or a small change in wording, and sometimes we can test much bigger things. An interesting example I saw not long ago was using A/B testing to test trial length at the beginning of user acquisition. There's this company that didn't know whether the ideal trial length for a client was two weeks, four weeks or six weeks, so they used A/B testing to get the answer to that question. So just be creative about how you use your A/B testing. An example from a project I worked on myself was an online help center. We were working for a telco company, redesigning their online help center, and our main question was: what's the best way to navigate to our new help center? It was really about the entry point: where would clients want to start? We had different ideas, and we decided to push the A/B test to an A/B/C test because we had three different interaction patterns we wanted to test. One was a search in the header, the second was a help center dropdown, and the third was a floating widget on every page. Actually running this test helped us gather a lot of insights about usage and how clients start their journey, and it helped us choose one of the solutions, which in our case was the dropdown menu.
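On the mechanics side, an A/B/C test like this one needs each visitor to be assigned to one variant, and consistently so across visits. One common way to do that, sketched here with variant names taken from the example above (the implementation itself is our illustration, not how that project was built), is to hash a stable user identifier into buckets:

```python
import hashlib

# The three help-center entry points under test.
VARIANTS = ["search_in_header", "help_center_dropdown", "floating_widget"]

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to one variant, so the same user
    always sees the same entry point on every visit."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42"))  # same input -> same variant, every time
```

Hashing rather than random assignment matters because a user who sees a different entry point on every visit would contaminate the comparison between versions.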
[00:19:05] Jess, you want to go for a smoke test?
[00:19:07] Absolutely, yes. So let's talk about smoke tests. Smoke tests are a family of tests with the same concept: you build a fake entry door to a service or feature that does not exist yet, one you're thinking of building but have not developed yet. For instance, you could build a fake landing page, or a fake button that redirects to nothing. This technique is really great for tracking the traction your idea has at a large scale, and also for testing it in real-life conditions. Let's dig into the Pivotal Tracker example. Pivotal Tracker is a tool to manage backlogs for product development, aimed at product managers and developers. At some point they reached a level of maturity where they were wondering: what is the next big thing for us? What is the next big feature that's going to change the game for our users? They thought of a whole new feature that would help with team alignment, with several teams working better together and handling their dependencies. And they really struggled to validate that this would be valuable enough for their users to justify investing the next few months in it. So what they did is that they made a fake blog post offering to subscribe to a beta for that feature, a feature that did not exist yet; there was actually no beta at all. They used a very simple form where people could subscribe, and they tracked how many people subscribed. That subscription form just told them: okay, thanks for subscribing to the beta, we'll let you know whether you're actually a good candidate for the beta or not. And what happened is that the test
[00:21:15] was, for them, successful, because they reached the conversion rate they had determined at the beginning, which was 10%. The other great part of this type of testing is that they now had a list of users who were interested in this concept, in this value proposition, and they used that list to reach out to them, do some more exploratory research, and refine their solution and their ideas as they went.
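The analysis side of a smoke test can stay this simple. A minimal sketch, assuming you log visits to and sign-ups from the fake entry door, with the success threshold (10% in the Pivotal Tracker story) fixed before launch; the traffic numbers here are made up:

```python
def smoke_test_outcome(visitors: int, signups: int,
                       target_rate: float = 0.10) -> str:
    """Compare the observed conversion rate of the fake entry door
    against the threshold chosen before the experiment ran."""
    rate = signups / visitors
    verdict = ("hypothesis validated" if rate >= target_rate
               else "hypothesis invalidated")
    return f"{signups}/{visitors} signed up ({rate:.1%}) -> {verdict}"

# Hypothetical numbers for illustration.
print(smoke_test_outcome(visitors=800, signups=96))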
[00:21:49] Concierge MVP. This is a really good way of testing, and actually of refining your solution while testing. It consists of a test where the actions are done manually by a person, and the end user is aware of it. The downside of this test is that it's not scalable, of course, as it's all manual; you'll need to actually implement automation or development afterwards. But that's also its advantage: it's really easy to put in place, it's not costly, you can learn a lot, and you can adjust a lot of your solution on the spot as well. So, a few examples here. The first one is a food delivery service for a major supermarket brand in the UK that wanted to develop a high-end delivery service. Their question at the beginning was actually about logistics. It was a really key one, one of those risky assumptions Jess was talking about, one that could actually kill this new offering: is it possible to deliver in less than 60 minutes? So basically, a client who was participating in the research created their grocery list in Google Keep. Then the team manually took note of all the different groceries on the list, and a member of the team, wearing a GoPro, went and prepared the order and delivered it as well.
[00:23:17] What was really interesting here is that not only did they validate the logistical feasibility of the service, they also discovered a lot of pain points and insights around the preparation of the order and the delivery. One example: when someone orders bottles of water, those are heavy and pretty big as a whole, so they cannot be delivered by bike, which was one of the main means of delivery imagined at the beginning. So there were a lot of cases that weren't taken into account at the beginning and that came out of this test.
[00:24:02] Another example is an energy provider. It was a startup at the very beginning of its journey, and the founders had a lot of ideas for their value proposition. They were really asking themselves: which of these value propositions will resonate most with our target? So what they did is that they created three different landing pages, one for each value proposition, and launched three Google AdWords campaigns. Of course, each of these landing pages also provided a way to sign up for the service. Just by doing this test really quickly, they could pick the value proposition that had a much better conversion rate.
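When comparing landing pages like this, it helps to check that the difference in conversion rates is real rather than noise. A standard chi-squared test is one way to do that; the sketch below is our illustration with made-up numbers and placeholder value proposition names, not data from that startup:

```python
from scipy.stats import chi2_contingency

# Hypothetical results per landing page: (visitors, sign-ups).
results = {
    "green_energy_first":  (1000, 52),
    "cheapest_on_market":  (1000, 31),
    "switch_in_5_minutes": (1000, 88),
}

for name, (visitors, signups) in results.items():
    print(f"{name}: {signups / visitors:.1%} conversion")

# Contingency table: one row per page, columns = [signed up, did not].
table = [[signups, visitors - signups] for visitors, signups in results.values()]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value = {p_value:.4f}")  # a small p-value: the pages really differ
```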
[00:24:53] All right, the Wizard of Oz now. It's a concept quite similar to the concierge MVP, in that you pretend your system is a complex AI doing complex things while performing the tasks manually. But this time, the end user has no idea that they are interacting with a human being; they think they're just talking to the machine or using an AI, while someone on your team does the actions manually. This is great for refining and designing a solution that is really complex. Think chatbots, AI: this is the perfect way of testing how your chatbot would interact, the tone, what type of proactive things you could offer the user, and seeing how they would react. So this can be a very powerful way of testing things that are complicated to mock up. One good example is Zappos.
[00:26:04] In the early days of e-commerce, Zappos' founder was wondering whether people would be willing to buy shoes online without trying them, touching them, seeing them in real life. What he did, instead of going and buying a whole stock of shoes that people might never buy and being confronted with all the logistical problems of holding stock, was go to the shop next door and take pictures. After gathering a bunch of shoe pictures, he uploaded a catalog to his website to see if anyone bought the shoes and how it went. What happened is that when someone actually purchased on his site, he would just run to the shop, buy the shoes himself, put them in a box and send them to his customer. Of course, that's absolutely not scalable; that's not what he aimed to do in the long run. But it allowed him to validate that people would be willing to buy shoes online. And another good thing at this very early stage of his business is that he was able to understand what type of shoes were in high demand, and what he should prioritize when purchasing his actual stock.
[00:27:32] Okay, another good example of that is a startup called Aardvark. That startup has since been acquired by Google. It's a social search engine, pretty similar in concept to Quora. At one stage of their development, they really wondered what type of features they should prioritize to bring in more growth, encourage more interaction with their product, and retain their customers. They were wondering: what should we do? What should our algorithm do? What should we focus on first? What they decided is that for nine months, interns manually performed what their algorithm would do in the long run. They reviewed the conversations, they categorized them, and they manually moderated the discussion threads.
[00:28:34] And what they learned while doing this is the patterns in what power users do, the people that stick to the platform, keep on using it and keep coming back: they saw that these people tend to be engaged in multiple threads at the same time. After this nine-month experiment, they had a good, solid idea of the different categories of conversation that happened, of what they should scale and automate with their algorithm and how to build it, but also of the key things they should encourage so that their users stick to the solution and keep coming back, so they can generate growth. And during that Wizard of Oz phase, that very manual and scrappy phase, they still managed to raise $30 million.
[00:29:34] Wow, nice example. Another way to test is to use competitor testing. If you are thinking about a feature or capability that your competitors already have, you can sometimes use your competitors' products to run tests with your users. What's interesting in this kind of testing is that not only will you get insights about a solution, you will also get a lot of information about how your users see you, how they see your competitors, and how they see the brands of your different competitors, or of the one competitor you chose. So, a pretty insightful way of testing. A lot of times we think about it in an online way, but I have an example of competitor testing done offline as well.
[00:30:27] Quite a few years ago, I was working for a car rental company, on the repositioning of this company: building the value proposition and the service offering. The question we had was: what are the needs and behaviors of clients arriving at the airport? Basically, most of our clients were business people arriving at the airport, so that was actually their first real contact point with the rental car service itself. What we did is that we spent almost a day at the airport, shadowing competitor rental car companies. We didn't have to rent space at the airport ourselves, and we didn't have to go through tons and tons of research. We just spent a few hours acting as if we were clients, standing in line and listening to all the different questions the rental car users were asking the other company, and all the different options they were discussing. This was super insightful, because not only did we learn a lot about their behaviors, we also learned a lot about the questions and the services they were looking for.
[00:31:56] Oh, and the funny name one.
[00:32:00] Yes, so picnic in the graveyard. What is it? Okay, picnic in the graveyard is about reaching out to people who had an idea similar to yours, or a similar business, that failed. The reason you might want to consider this is that it allows you to learn from them: to learn which pitfalls you should be careful about and avoid, but also which opportunities they pursued or not, so you can course-correct your strategy, adapt your business model or try something slightly different, and avoid repeating the mistakes other people have made before you.
[00:32:51] So we've shared a few examples to show how different the approaches can be and how far you can get with testing. A lot of times when we talk to people in product or design, we hear that prototypes are the first way of testing, but there are so many other ways. So our message here is really: be creative. Depending on the stage of your product and your company, there is always a way, and a different way, to run a lean experiment. To push this even further, we have two last examples of how you can get really creative. The first example is a startup that was creating educational videos for piano players. They were just starting their business and they had a big question that was fundamental for it: what makes a good educational video for piano? At the time they had no videos and didn't have the means to create a lot of them, so they decided to leverage existing videos on YouTube. They used Amazon Mechanical Turk, a service where people around the world, for a fairly low amount of money, perform small tasks; here, they tagged popular piano videos on YouTube. The workers watched each video and answered questions: what is it, what category, what characteristics? By analyzing the correlations between popularity, category and characteristics, the startup was able to understand what would make a good video. What was also super interesting here is that they didn't need to generate any data of their own; they could just use public data, and that was a great starting base for learning what would make their business successful.
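A sketch of the kind of analysis this enables: once workers have labeled each video's characteristics, correlating those labels with a popularity measure points to what the well-performing videos have in common. The tags and numbers below are invented for illustration, not the startup's actual data:

```python
import pandas as pd

# Hypothetical Mechanical Turk output: one row per YouTube video, with the
# characteristics the workers tagged (1 = present) and a popularity measure.
df = pd.DataFrame({
    "views":             [320_000, 45_000, 210_000, 9_000, 150_000],
    "close_up_of_hands": [1, 0, 1, 0, 1],
    "shows_sheet_music": [1, 1, 0, 0, 1],
    "under_10_minutes":  [1, 0, 1, 1, 0],
})

# Which tagged characteristics move together with popularity?
print(df.corr()["views"].drop("views").sort_values(ascending=False))
```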
[00:35:03] And another example that is pretty low-fi is this kit page that we did for one project with a telco company. We were building a new user dashboard, and we had a lot of very divergent ideas about what should be in that dashboard: what is the essential thing the user should see right away when connecting to their account? We really had endless debates about what should be there. So we decided to go for this little kit-page experiment, and to spend ten minutes of our forty-five-minute user interview sessions on it. We gave our participants a blank page with just the navigation bar on top, plus little pieces of paper, and asked them to lay out what they thought should absolutely be there. And we observed them arrange it; they even created their own components when ours were not enough, re-adapting as they went: here, for instance, they found the frequently asked questions block too long and decided to make it shorter.
[00:36:17] And after only five interviews, we already had very clear patterns about the top features and the top information they needed in there. There was even information that was not on our radar and that was super important for our users.
[00:36:40] I love this example, because I think if we had gone with a prototype built on just some of our assumptions, we would have had to go through two or three rounds of testing just to get the same amount of learning as we did here.
[00:36:56] And it really helped move past the debates faster. All right, so let's recap a little bit of what we talked about. We hope we convinced you of the importance of testing your hypotheses, testing your assumptions. This is critical, and you should really be minimizing the risk of building the wrong thing by testing early and often.
[00:37:22] We also shared our framework, the five steps we follow to define and run experiments. First, identify your hypothesis: be aware that it is a hypothesis. Second, brainstorm on an experiment, on how to test that hypothesis. Third, define your success criteria. Fourth, run the experiment, of course. And the fifth step is to measure the results and take the decision.
[00:37:59] And really, what's important is to cultivate that mindset and to think outside the box. Don't jump straight into building the feature, of course, but also don't jump straight to prototyping and feel stuck in that one method of testing. There are a lot of great ideas out there, and you can mix them, combine them, look for inspiration everywhere. Really look at what's right for you at your stage of development and what makes sense in your context.
[00:38:31] So we're getting to the end of our presentation, and Jessica and I would like to leave you with one question. If you think about the different ideas you're debating right now with your team: how will you test them?
[00:38:52] Thank you very much.
[00:38:56] Thank you, everyone.