Rachel Dubois
Transcript
[00:00:07]
Hello everyone. Yay! So, three things. First,
[00:00:15]
I'm totally French, but I was asked to speak in English today, so bear with me on that one. This is a brand new talk. I haven't given it anywhere else, so you are kind of my guinea pigs. Feedback is welcome.
[00:00:35]
Also, I've had pneumonia, so I'm a bit sick, and sometimes that shows. So bear with me on that one too.
[00:00:45]
Let's start. Beyond Agile.
[00:00:49]
So, what if everything you knew about Agile, product management and innovation was wrong?
[00:00:58]
Yay! When we think about product innovation, in tech at least, we have this idea in our heads: some kind of magical superpower of very creative designers in very agile, nerdy, techy companies. And the usual connection.
[00:01:23]
Innovation equals discovery equals Agile. But today I'm not here to talk about that the way we usually hear it. I'm not going to talk about dual-track discovery and delivery. I'm not going to talk about sprints, frameworks, velocity and rituals. I'm here to talk about what separates truly great product companies from the rest of us. Something really fundamental: impact.
[00:01:59]
And I'm also going to try to show you that you may be trapped in some kind of illusion.
[00:02:09]
Some time ago, I was working for a company that was like Agile perfection. I'm a consultant, an Agile coach, so: dreamland. They had well-paced sprints, demos, retros, textbook really. PMs, POs, Scrum Masters, coaches, OKRs, everything, all the Agile magic. Honestly, a real example of what a modern Agile company should look like, right? But then I went to the leadership team and we looked at the numbers.
[00:02:52]
And well, not that good.
[00:02:58]
Zero impact. I'm not saying little impact, no, no: zero business impact. KPIs were flat at best, declining otherwise. And the features? Beautifully shipped into a digital graveyard.
[00:03:19]
Panic was palpable. "Rachel, we are doing everything right, right?" they asked.
[00:03:30]
Yeah.
[00:03:33]
You do everything well, but the wrong things.
[00:03:39]
Efficiency without learning is just waste.
[00:03:43]
It's wasting money, time, everything.
[00:03:51]
Let me ask you something.
[00:03:55]
When was the last time that you really measured improvement after having shipped something? Real customer behavior change.
[00:04:09]
I'm not talking about the number of features you shipped. I don't care about that. I don't care about points. I don't care about the number of sprints you completed. I'm talking about real, measurable business impact.
[00:04:28]
Try to remember that. Got it?
[00:04:33]
Hopefully it was not that long ago, I hope.
[00:04:39]
The issue is that we tend to confuse delivery speed and volume with innovation capability. We mistake motion for progress.
[00:04:53]
And by doing that, we completely lose sight of the business.
[00:04:58]
And the companies that truly innovate, that actually change the market, operate in a totally different universe, a totally different reality.
[00:05:15]
My name is Rachel.
[00:05:18]
I've been working in tech for the last 25 years, mostly as a product lead; coding was not really my best talent, to be honest. And I've been an Agile coach and product coach for maybe the last 15 to 17 years, okay? I've been lucky enough to work among product teams at Spotify and Vinted, two amazing product-led companies, two amazing unicorns. I've also been lucky enough to extensively interview PMs and teams at Amazon, Airbnb, and Google for the book I'm writing about product management. And my talk today is really about seeing behind the curtain.
[00:06:10]
Meet Anna.
[00:06:13]
Anna is fictional, of course, but she's very representative of the PMs I've been working with at Spotify.
[00:06:20]
So, we are going to follow her on her journey.
[00:06:28]
Anna doesn't start her week by opening Jira.
[00:06:34]
She starts the week, and most days to be honest, by opening Backstage to access the dashboards.
[00:06:42]
Those dashboards don't show project or delivery progress. They show data, and this data is really about product performance, technical performance, customer usage.
[00:06:59]
And she looks at those bars. What's hot? What's rising? What's dropping? What's surprising?
[00:07:10]
What is stuck?
[00:07:12]
She looks at the raw data. She looks at the trends. There are no explanations, just signals. And sometimes, red pops up.
[00:07:29]
So she goes to the team she works with.
[00:07:33]
She knows them well.
[00:07:36]
Besides her, the PM, the team includes engineers, designers and data scientists. They know each other quite well. They've been working on that product area for years.
[00:07:48]
And they have a clear goal. This one was communicated by the leadership team of the company and is very tightly connected to company goals and strategy.
[00:08:00]
Their goal is not to deliver some stories planned in the backlog or on a roadmap. Their goal is to understand users' issues and how those issues get in the way of the company's business success.
[00:08:18]
And today, the problem is this one: why has the shared playlist completion rate dropped by 8% within the last few days?
[00:08:34]
This is potentially lost money.
[00:08:40]
They have some usual suspects, of course.
[00:08:44]
They know their stuff, right? They know their product area inside out. They might have some hypotheses about why this key metric dropped.
[00:08:56]
But they want to keep an open mind.
[00:09:00]
So they launch an investigation. They start by analyzing the data they already sit on. Did they miss something?
[00:09:09]
Was there a pattern they didn't pay attention to? Maybe some data points they haven't seen?
[00:09:20]
Then they also go and look at previous user interviews they have done, or previous focus groups, and they look at surveys, trends and research provided by the data and intelligence teams, more on the macro trends and the market.
[00:09:39]
They also go with the engineers to look at logs and technical monitoring reports. There might be an explanation somewhere, right?
[00:09:49]
And they gather the data.
[00:09:55]
Yet, they feel something might be missing.
[00:10:00]
Something is odd with the data.
[00:10:04]
Somewhere, there is something that is missing, a clue.
[00:10:09]
So they decide to go and meet customer support, to see if there might be something in the complaint tickets related to this feature.
[00:10:24]
To really enrich their understanding of the situation.
[00:10:30]
They find some things, some complaints from users, and they decide to dig deeper. So they contact the users.
[00:10:40]
The ones who complained.
[00:10:44]
And to explore further, they book some interviews with them.
[00:10:50]
Short, recorded interviews. They try to better grasp the intent of the user at the moment the situation was painful for them. And they capture that using a framework many of you already know, I guess: Jobs To Be Done. To really understand what it is the customers want to do, and how they were blocked from doing it.
[00:11:20]
Based on that, they go back all together.
[00:11:25]
Brainstorming on the problems they uncovered during the interviews and the data analysis. They try to turn them into opportunities. They use another very classical tool for that, nothing fancy: "How might we", for example, works well. Turning problems into opportunities, linking them, structuring them into an opportunity solution tree, for example, if you know Teresa Torres's work.
[00:11:57]
For each of them, they try to map the options. They also try to indicate their weight.
[00:12:07]
So that they can compare them.
[00:12:10]
So that they can prioritize them, so that they can really focus on what is juiciest for them to solve, because you can't solve everything. You have to focus.
[00:12:22]
And all together, they decide on a few of them. They bet on them.
[00:12:30]
They decide they are going to build some alternatives of the displays and the funnel the user goes through. To stress-test their hypotheses and their understanding, and to see if there is a more efficient display or funnel.
[00:12:53]
Yay! The next step is to elaborate, to transform the hypotheses into something more tangible, right? Something we can ship to users.
[00:13:06]
So here, two steps. First, transform the hypotheses into something we can really articulate our strategy on. They're going to use the DIBB framework, DIBB standing for Data, Insights, Beliefs and Bets. That's a very simple, lightweight way to capture that and to structure the knowledge and the strategy of the team. PMs and data scientists usually work on them. They have a kind of experimentation protocol, "if change, then effect, because of this rationale and data", a very scientific way to do it. What is really important here is to make a clear connection between that, the change they expect in customer behavior, and the change they expect on the KPIs they are looking at, right?
[00:13:56]
For example,
[00:14:00]
If we change the position of the shared playlist access point on the home page, then we might see an increase in that specific user engagement rate. If not, that means that's not the problem and we need to change and pivot, right?
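To make this concrete for readers of the transcript, here is a minimal sketch of how such a DIBB record and its if-change-then-effect protocol could be captured; the field names and example values are illustrative assumptions, not Spotify's actual template.

```python
from dataclasses import dataclass


@dataclass
class DIBBHypothesis:
    """Illustrative DIBB record: Data, Insight, Belief, Bet (field names are assumed)."""
    data: str             # the observed signal
    insight: str          # interpretation of the signal
    belief: str           # the causal assumption we want to test
    bet: str              # the change we will ship
    target_metric: str    # the KPI we expect to move
    expected_effect: str  # "if <change> then <effect>" experimentation protocol


playlist_hypothesis = DIBBHypothesis(
    data="Shared-playlist completion rate dropped 8% in the last few days",
    insight="Users struggle to find the shared-playlist entry point",
    belief="The entry point is buried too deep on the home page",
    bet="Move the shared-playlist access point higher on the home page",
    target_metric="shared_playlist_completion_rate",
    expected_effect="If we move the access point, completion rate recovers; if not, we pivot",
)
print(playlist_hypothesis.expected_effect)
```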
[00:14:19]
They build the variants quite quickly using Encore. Encore is the Spotify design system, with the UI and also the behavior encoded, and it is directly available to everyone through Figma, directly injected into it.
[00:14:38]
So engineers and designers can really work together on building those variants, those alternatives of the same screens and displays and so on.
[00:14:48]
And all the code they create is put behind feature flags; that's the colorful representation there.
[00:14:59]
Feature flagging is used to control the display and the rollout of the variants to users, right?
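As an illustration of that idea, a minimal sketch of a feature-flag check that decouples deploy from release; the tiny FlagClient here is a hypothetical stand-in, not the actual tooling used at Spotify.

```python
class FlagClient:
    """Tiny stand-in for a real feature-flag service (hypothetical, for illustration)."""

    def __init__(self, flags: dict[str, set[str]]):
        self._flags = flags  # flag name -> set of user ids with the flag turned on

    def is_enabled(self, flag: str, user_id: str) -> bool:
        return user_id in self._flags.get(flag, set())


def render_home_entry_point(user_id: str, flags: FlagClient) -> str:
    # Deploy and release are decoupled: the redesigned code ships in the build,
    # but only users whose flag is on ever see it. Toggling off needs no redeploy.
    if flags.is_enabled("shared_playlist_entry_redesign", user_id):
        return "redesigned shared-playlist entry point"
    return "current shared-playlist entry point"


flags = FlagClient({"shared_playlist_entry_redesign": {"user-42"}})
print(render_home_entry_point("user-42", flags))  # redesigned entry point
print(render_home_entry_point("user-7", flags))   # current entry point
```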
[00:15:08]
So this is done very quickly, a few days.
[00:15:12]
Because they don't want to miss the window of the weekly mobile app release. Every Thursday, they wrap up and release the app.
[00:15:23]
And ping, they make it, the code is shipped. Now, through the backend, the team can set up the A/B testing campaign. They can decide which sample of the user base to target. Remember that Spotify has roughly 600 to 700 million users, so they can segment that very easily: make samples, define control groups, and launch A/B testing campaigns.
[00:15:57]
Just one? No. We always talk about A/B testing, like A or B, but real life is not that. Real life is more A, B, C, D: there are several variants that are shipped and tested at the same time.
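A minimal sketch of how users might be deterministically bucketed across several variants plus a control group; the hashing scheme and traffic shares are a common pattern chosen for illustration, not the actual experimentation platform described here.

```python
import hashlib

# Cumulative traffic allocation for one experiment: most users stay in control,
# a small sample is split across several variants (A/B/C/D rather than just A/B).
ALLOCATION = [
    ("control", 0.96),
    ("variant_b", 0.01),
    ("variant_c", 0.01),
    ("variant_d", 0.01),
    ("variant_e", 0.01),
]


def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministic: the same user always lands in the same bucket for this experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    cumulative = 0.0
    for variant, share in ALLOCATION:
        cumulative += share
        if bucket <= cumulative:
            return variant
    return "control"  # fallback for floating-point rounding at the boundary


print(assign_variant("user-42", "shared_playlist_entry_redesign"))
```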
[00:16:17]
At any moment in time, there are around 2,000 experiments running at Spotify.
[00:16:26]
And this is not the biggest company. Amazon, I'm sure, has way more than that.
[00:16:33]
But it gives you a sense of the scale of innovation those companies put effort into. And I think that's really something to remember: the goal here is to launch experiments to gain insights and drive the strategy forward. So we need to experiment a lot.
[00:16:57]
A few days later, the first results start to come in.
[00:17:10]
I'm sorry.
[00:17:12]
The first results start to come in and we can already see some interesting patterns. So the team goes into the details of the results and tries to see: are they good?
[00:17:23]
Are they good enough, which is slightly different, right? Is there no harm?
[00:17:30]
Some variants are very good on one KPI and harmful on other KPIs, and this we want to pay attention to. The new designs show that some variants have no effect at all. Some of them improve completion rate. Whoo! But some others create damage: they increase churn among premium users, which is exactly what we don't want, right? We want to make money, so we don't want to hurt that.
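A minimal sketch of that "good, good enough, no harm" check, where a variant only scales if the primary metric improves and guardrail metrics like premium churn don't degrade; the metric names and thresholds are assumptions for illustration.

```python
def verdict(results: dict[str, float],
            baseline: dict[str, float],
            primary: str = "playlist_completion_rate",
            guardrails: tuple[str, ...] = ("premium_unsubscription_rate",),
            min_lift: float = 0.01,
            max_harm: float = 0.005) -> str:
    """Return 'scale', 'kill', or 'no_effect' for one variant (illustrative thresholds)."""
    for metric in guardrails:
        # Any guardrail degradation beyond tolerance kills the variant,
        # even if the primary metric looks great.
        if results[metric] - baseline[metric] > max_harm:
            return "kill"
    lift = results[primary] - baseline[primary]
    return "scale" if lift >= min_lift else "no_effect"


baseline = {"playlist_completion_rate": 0.42, "premium_unsubscription_rate": 0.010}
variant_b = {"playlist_completion_rate": 0.45, "premium_unsubscription_rate": 0.011}
variant_c = {"playlist_completion_rate": 0.46, "premium_unsubscription_rate": 0.020}
print(verdict(variant_b, baseline))  # scale
print(verdict(variant_c, baseline))  # kill: better completion, but premium churn is up
```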
[00:18:07]
So they have these very passionate conversations all together as a team. Should we kill, should we scale? What do we need to kill, what do we need to scale?
[00:18:19]
And how do we do that, based on the findings?
[00:18:23]
They decide together, they don't blame. They learn together.
[00:18:31]
There's no stress, really. Rollback is easy: just toggle it off through the backend.
[00:18:38]
Quick analysis, toggle off, what have we missed, what have we learned, move on.
[00:18:49]
The team will clean the unwanted variants from the codebase in the next refactoring session, and move on to the next thing to explore in the opportunity solution tree. Because they still have a problem to solve.
[00:19:04]
Rinse and repeat.
[00:19:07]
It's not linear, it's more something like that.
[00:19:12]
This is a real product pipeline, where innovation flows from everywhere, at every stage. It's not a rigid delivery plan.
[00:19:22]
It's not a visionary roadmap. It's a kind of living system, if you want a metaphor. It's capable of sensing, testing, adjusting, reacting, learning. We're talking about a kind of breathing creature.
[00:19:44]
And more importantly, this is a system designed to maximize learning, not to maximize delivery. I started this talk by showing you an Agile illusion in a company. Let's switch to real life, shall we?
[00:20:08]
Let's dissect unicorns!
[00:20:17]
The vest.
[00:20:20]
So, first, we need to agree on some basics, you and me. First one is, repeat after me: a product is not a list of tickets in Jira. Try it.
[00:20:38]
Yes.
[00:20:41]
A product is not a roadmap to execute. Okay? We really need to change the narrative there. That's painful. That's deadly. Change that. That's 20 years ago. Move on.
[00:21:00]
Thank you. Thank you. So we don't manage backlogs, we agree on that. What we do is grow and maintain a living creature, our product. One that senses, reacts, evolves, and learns to survive and thrive. That's what we want to do. And this organism, like any organism in the world, has at least three basic systems: a nervous system, an immune system, and a circulatory system.
[00:21:29]
Nervous system. The nervous system gives unicorns the ability to observe, sense and feel the outside world. So that they understand it.
[00:21:44]
Without a nervous system, there is no perception and no action. A product without sensors, logs and trackers, without functional and technical monitoring, is blind and deaf. You don't want that. Really, you don't want that. At Airbnb, for example, they have heavily automated insight generation. There are plenty of algorithms that detect what they call abnormal behaviors from users, or evolving usage patterns. Every deviation from what they have observed is logged, is tracked, is pushed to you. Because those are small signals from the outside world that the world is changing. And we need to adjust to it to stay relevant.
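A minimal sketch of the kind of deviation-from-baseline check being described; this simple z-score test is a generic illustration, not Airbnb's actual algorithms.

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag a metric value that deviates too far from its recent baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    z = abs(today - mu) / sigma
    return z > threshold


# Daily shared-playlist completion rate over recent days (illustrative numbers).
recent_daily_rates = [0.42, 0.43, 0.41, 0.42, 0.44, 0.43, 0.42, 0.41, 0.43, 0.42,
                      0.44, 0.42, 0.43, 0.41, 0.42, 0.43, 0.42, 0.44, 0.43, 0.42]
print(is_anomalous(recent_daily_rates, 0.34))  # True: push an alert, a nerve has fired
```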
[00:22:42]
All these small signals trigger alerts; to keep the metaphor, it's like nerves reacting to pain.
[00:22:53]
Or to burn. You want to know if you are burning your hand, right? Same.
[00:23:02]
But just sensing is not enough; then you need to process the data, you need to make something of it, right? To make sense of and react properly to the signals the market sends you. And all the unicorns have integrated into the DNA of their operations that you can't do that if you separate discovery from delivery of the solution, the trying from the doing. That's why they have product teams made of several functions, EPDA: engineering, product, design, data analysts. It's not an option, it's a standard.
[00:23:50]
So all the discovery lives within the team. They are the ones who lead and combine exploratory work when it's needed, interviews, focus groups, whatever, with experimentation on the product and data analysis. And they do that as a whole team. It's a team sport.
[00:24:13]
It can be messy sometimes, but that's the best way to play it. One flow, one team.
[00:24:22]
If your nervous system atrophies, what happens? You keep delivering, but you no longer know whether it has any effect on customers, users and the market. You lose sight of market evolution, and that might be deadly.
[00:24:46]
Immune system. I was lost.
[00:24:50]
So innovation is risky.
[00:24:54]
Each time you code and ship a new variant, you don't just deliver value, okay? You also take a risk. Why? Because you are potentially injecting into your codebase instability, anomalies, regressions, bugs, call them what you want: pathogens.
[00:25:13]
And if your product doesn't have a strong immune system, every new feature that you ship can become lethal. This is not a metaphor.
[00:25:24]
This is truly how software works, and how it breaks: silently at first, and then all at once.
[00:25:34]
So you want to have that, and that's why unicorns have heavily invested in the elements of this immune system. Trunk-based development, to reduce integration hell. Feature flagging, we mentioned that already, to control exposure and decouple deploy from release, and also to be able to activate and deactivate through the backend without needing to release anything again.
[00:26:04]
And you can test in production, just like that. Automated testing, to catch issues before users find them, right? Observability, to detect anomalies very, very early on. Strong CI/CD pipelines, to ship fast and safe. DevOps culture, to equip and support the overall organization. All of that. This is a must-have, this is a standard, this is almost a commodity. Come on. For them, at least.
[00:26:41]
But reacting and preventing new variants from hurting you and damaging your product is maybe not enough.
[00:26:48]
Sometimes we also need some prophylaxis. Amazon, for example.
[00:26:55]
They have routines to actively remove every feature, every variant that hurts business KPIs.
[00:27:10]
If there is a drop, there is a kill. Regardless of the investment in the feature. Okay? Why is that?
[00:27:20]
They also have regular reviews of all the features they have, comparing them with the data, the usage and so on. If a feature is not used, or has very low usage, it might be killed. Why? Because we don't want to keep things in our system that are not necessary. They will rot.
[00:27:47]
And they will make you sick. Pay your tech debt. Pay your design debt. Trim the tree.
[00:27:57]
I would suggest: one refactor day keeps the doctor at bay. A robust immune system is vital for fast innovation, for it will prevent you from ruining your business and your product out of your own good intentions.
[00:28:16]
As we transition to the circulatory system, we will think about how we keep our product alive and streaming.
[00:28:28]
Yeah. Because innovation is not just about launching stuff. It's also about making sure that what we have launched is nurtured so it can grow.
[00:28:39]
So, first things first, let's agree that a product doesn't live and feed on quarterly reporting. Okay? Reporting won't make you successful. It has no effect on growth, right? What is important is that you pay regular attention to ensuring that your product and the people managing it are really nurtured. To achieve that, not only do we need to connect the dots in the data and connect hypotheses to effects, we also need to make sure that within the organization, information and intelligence are shared and flow freely across the company. It means you will have to connect people. It means you will have to connect the flow of information between teams, and collaboration is key.
[00:29:47]
Think of the three pieces I'm going to give you as contributing to the bigger picture. First, automate your alerts.
[00:29:57]
All the alerts that pop when there is a drop, a usage spike, an issue, whatever: automate them and push them directly to wherever is most convenient for your team to find them.
[00:30:13]
Usually, for example, that's a Slack channel.
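A minimal sketch of pushing such an alert into a Slack channel via an incoming webhook; the webhook URL and message format here are placeholders, not any specific company's setup.

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL


def push_alert(metric: str, change_pct: float, dashboard_url: str) -> None:
    """Send the anomaly straight to the team's channel so nobody has to chase it."""
    message = {
        "text": f":rotating_light: {metric} moved {change_pct:+.1f}% vs. baseline. "
                f"Details: {dashboard_url}"
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


# Example call (commented out because the webhook URL above is a placeholder):
# push_alert("shared_playlist_completion_rate", -8.0, "https://example.internal/dash/123")
```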
[00:30:17]
So they don't have to chase it; the data comes to them. Second, dashboards should be available to everyone. Technical dashboards, product dashboards, yes, also business dashboards. How can people work if they don't have access to that information? It should be self-service: use Tableau, enable your PMs to run queries, to search for things. It should not be hidden in some private Google Drive or Teams space somewhere; that doesn't work. And free flow is not just about tools. It's also about behaviors. It's about how you communicate and collaborate and meet and work together. So also try to set up some rituals to make that work: have regular reviews of ongoing experimentation, perform reality checks.
[00:31:10]
Do performance reviews and share them widely, at least within the company.
[00:31:16]
We need to make sure all those conversations are happening between functions.
[00:31:24]
We are human. I know that in Agile we want to, as they say, break the silos. Okay, that's the Agile motto: break the silos. Honestly, wherever there are human beings, there will be silos. So get over it, okay?
[00:31:39]
Live with it, there will be silos. What we need to make sure of is that there are no kingdoms. A kingdom is people deciding they have exclusive ownership of a piece of the system, and they regulate it, control it and filter it. That, you have to chase aggressively. But silos? Get over it.
[00:32:02]
Okay? All of that is what our living creature needs. Nervous system, we talked about it, immune system, circulatory system. But something is missing.
[00:32:18]
Let's open the unicorn brain. Let's have a look in the control room.
[00:32:26]
Ha.
[00:32:30]
All the credit for this design goes to a little girl of 11 years of age using ChatGPT.
[00:32:40]
So, let's see what happens in the control room. I'm going to talk you through the first 24 hours of a massive feature failure at a unicorn company. Okay. You are part of the team, you just rolled out this very new, fancy feature, and it has been rolled out to the full user base. Millions of users everywhere in the world, right? The team is super confident. They did exploratory work, they interviewed users, they stated their hypotheses, they tested the prototypes, they ran pre-tests, they launched the variants, they did some very, very serious A/B testing, and all the results looked just fine, really.
[00:33:30]
So they decided to scale the experimentation. 1%, 2%, 5%, everyone. Yay.
[00:33:41]
But now, a few days after the full rollout, the numbers start to look a bit odd, right? Weird. Some usage is dropping, and we don't like that. Critical KPIs start flashing, and we start losing money.
[00:34:03]
This is really bad.
[00:34:06]
This is like damaging us.
[00:34:10]
Welcome to the control room, where crises are managed. Where the drama escalates.
[00:34:18]
Where leadership earns their salaries.
[00:34:24]
No. First things first: don't panic. It's bad.
[00:34:30]
But no drama.
[00:34:33]
This is a use case they designed for.
[00:34:37]
Okay? It's just part of the experimentation mindset.
[00:34:42]
Sometimes you succeed, sometimes you learn, they say.
[00:34:47]
So no drama. No escalation. No blame game, no salary increase for the leaders. No, no. Kill switch.
[00:35:00]
You just have to switch the variant off.
[00:35:05]
That's it. Done.
[00:35:08]
Just minutes after the team has decided they're going to switch it off, because it was harmful, the feature was bad, whatever, they don't yet understand why, but the data shows it's bad, they switch it off. Case closed.
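A minimal sketch of a kill switch as a backend flag toggle; the FlagBackend class is a hypothetical stand-in to show the idea of switching a variant off without a redeploy.

```python
class FlagBackend:
    """Minimal stand-in for a feature-flag backend with a kill switch (illustrative only)."""

    def __init__(self) -> None:
        self._enabled: dict[str, bool] = {}

    def enable(self, flag: str) -> None:
        self._enabled[flag] = True

    def kill(self, flag: str) -> None:
        # The kill switch: flips the variant off for everyone, instantly,
        # with no new release and no rollback deployment.
        self._enabled[flag] = False

    def is_enabled(self, flag: str) -> bool:
        return self._enabled.get(flag, False)


backend = FlagBackend()
backend.enable("shared_playlist_entry_redesign")
# ...metrics start flashing red after the full rollout...
backend.kill("shared_playlist_entry_redesign")
print(backend.is_enabled("shared_playlist_entry_redesign"))  # False: case closed
```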
[00:35:22]
That's the power of having early anomaly detection, the nervous system we just talked about. You can act fast and safely.
[00:35:35]
So, let's talk about kill switches for a second, because there is something very interesting in them, not just the technical thing, right?
[00:35:41]
So, when it comes to product... are there any product managers here?
[00:35:47]
Hey, love you.
[00:35:50]
So when it comes to product management, there are two big questions you have to deal with. High stakes. First is: should we ship it? And then there is: should we kill it, right?
[00:36:04]
What happens is that most companies, not the unicorns, are very much obsessed with the first one. They are very stressed about it, so they create go-live moments, and you have steering committees, and you have to write tons of documents to say we are going to ship it and put it live, sometimes there's even a whole plan just for that, you have plenty of things like that, right?
[00:36:33]
Unicorns made the opposite choice. They put their energy on the kill side of the spectrum. And this is a key differentiator: how quickly can you say "no, kill it", and actually do it?
[00:36:58]
And product led unicorns are really, really good at doing that.
[00:37:03]
That's why every team can launch things without making a fuss.
[00:37:10]
Because they're very good at killing things. Because again, the goal here is not to ship perfection, but to really validate whether what you are shipping is creating good effects or bad effects and damage. That's why they created this culture where every team can shut down a feature without having to ask for VP approval: no red tape, no blame game, just fast learning. And this ability to switch off is not just a technical afterthought.
[00:37:46]
It's a core part of the product operating model. Kill switch equals maturity. It's not fear of failure. You just show that you are capable of failing well and moving on.
[00:38:00]
On to the next one. A few days later, the team takes a moment to reflect. Call it a retrospective if you want.
[00:38:13]
Because each team owns their outcomes, but they also have the responsibility to learn from what they have done, right? So they ask themselves: what have we learned?
[00:38:26]
What have we measured that was surprising to us? This one is very interesting. Which of our beliefs turned out to be completely wrong? And most importantly, how are we going to use that to change the next things we do?
[00:38:49]
So working like that is not an occasional luxury for them. It's just hygiene. It's just the way it is; it works. And by the way, they're quite vocal about it: they've been publishing articles and blog posts, giving talks at conferences and so on. So this is not brand new. What bothers me here is this: why do so many companies still believe that doing a few interviews at the beginning of a project and then shipping tons of features without testing anything will suffice? I don't get it.
[00:39:35]
Let me give you a heads-up on the top three, because there are more than three, fatal illusions that I see around, and that I believe hold us back. They all look reasonable, they all seem to make sense, but they are not, right? So the first one is this: "We do discovery, we talk to users." Yes. "We did focus groups before launching." Yes. "We have user interviews every month." Woo. "We have user research." Nice, nice, nice.
[00:40:11]
"We collect user feedback. We have NPS," whatever. This is really good, that's really, really good, keep doing that. But it's not sufficient.
[00:40:23]
Because there is a slight difference between what users say and what they do. Those two live in different dimensions of the multiverse. Users mean well, they're not bad people. But their words don't match their behaviors. If you ask people what their favorite shows on Netflix are, they'll give you more intellectual answers than reality; they don't tell you they're watching, I don't know, Emily in Paris, for example. Okay? We don't show the crap when we're interviewed. We're human. Unicorns understand that, and that's why they don't just ask people what they want: they observe, and they cross-check with data. Are they really doing that? Because the insights they want to gain come not from opinions, but from patterns, contradictions, real usage. The best product teams I have seen are not searching for validation. They're looking for truth, even when it's not comfortable for them.
[00:41:38]
Sometimes they don't even need to write a single line of code to test that. You can use the Wizard of Oz approach to test whether you are about to burn your money on something worthy or not. Fake it. Show a new UI, put a human behind it, mimic the feature, and see if people really use it.
[00:42:05]
You will gain data, that data will feed your business case, and you will make better decisions. You'll see whether it's worth really coding the thing or not.
[00:42:18]
Second illusion: "We are great at delivering. We ship a lot, and fast."
[00:42:24]
We hear that a lot; we even heard it here over the last two days.
[00:42:29]
"We deliver twice the work in half the time", a reference to a book. "We have cut our time to market in two." Yay! "We ship every day. Our lead time has improved by 30%." Good. Perfect. All of that is super good, don't get me wrong. But did it change anything for the actual humans who are paying you for your features, products and services, the ones you make money on? Because if not,
[00:43:05]
who cares?
[00:43:08]
Again, unicorns don't care about delivering features. They care about shipping things that will prove whether they're right or wrong about how they understand their market and how they make money. And they want to do that with the best return on investment, meaning with the least money burned.
[00:43:32]
Less is more.
[00:43:35]
And by the way, that's why MVPs exist, huh? Don't do six-month, big-bang MVPs and so on, huh? Otherwise you haven't gotten the point.
[00:43:46]
So we should have something like this: each delivery should be attached to an explicit hypothesis statement. Use DIBB, or choose your own format if you want, that doesn't change anything. And connect it to business KPIs that are part of your business case. Okay? Delivering a lot is not succeeding.
[00:44:09]
Velocity and productivity are totally pointless if you are not bending the business curve.
[00:44:22]
Third one.
[00:44:25]
"We have OKRs." I'm so sorry. I see you, I feel you, really.
[00:44:32]
"We have aligned our teams with OKRs. They all have the same ones." Oh, great. "We have monthly or weekly check-ins, all delivery plans tied to objectives," etcetera, etcetera. Well done, really, well done, well done.
[00:44:47]
Except if your OKRs are just vague intentions aligned on outputs. Not on the learning flow, not on the real capacity to test, learn, adjust and abandon.
[00:45:03]
So often, OKRs look like this. Take a few seconds to read it. As you can see, it's all about delivery plans and milestones. This is really poorly crafted, you will surely agree. Because none of these key results tells us how the user experience is going to improve for new users, or how that's going to connect to growing our base of premium users, for example.
[00:45:33]
Just before this session, we heard Console Sorgen explain to us why we hate OKRs so much. And that's not really about OKRs themselves, but about how badly we use them and the systems we create around misusing them.
[00:45:53]
So let's rewrite the very crappy OKR I just showed you in some kind of unicorn language.
[00:46:01]
Bam.
[00:46:03]
Do we see any difference?
[00:46:18]
The difference is that the way they are formulated creates space for teams to explore. These are problems to solve, not things to implement. And that changes everything, really everything.
[00:46:38]
So we don't want a centralized roadmap and micromanagement. We want strong alignment on the problems we want to solve, and for leadership to loosen control, I would say, so teams can find the best solutions and execute. All three of those illusions are deadly, because they are the ones poisoning us.
[00:47:03]
So what's the way forward, Rachel? Because you're a very depressing person. I can see that in your eyes. No, I'm not. I have a fancy jacket, remember? My antidote is: reprogram your POM, your product operating model. It's time to think and talk about the things we can do to level up the game. Because yes, there are alternatives. No, you don't have to be Spotify or Airbnb, and you don't need to mimic them either, by the way, can we settle that?
[00:47:38]
And you don't need a Google-sized budget to do it either. Because it's not about implementing stuff, hiring people or buying tools. It might even be about getting rid of some of them, just saying.
[00:47:57]
Let's kill one myth first. You may think that unicorns are unicorns because they have this magic culture and leadership style that made them so magical. But truly, Amazon, Google, Spotify, Airbnb, Vinted, although they are all unicorns, they don't live in a magic dreamland where everything is nice. In fact, they cover a wide spectrum of cultures and leadership styles, and some of them you might even call toxic. So it's not about that. It's not so much a question of culture and leadership. The common point they share is a strong business rationale behind what they do and how they do it. Because they really understand tech.
[00:49:02]
If your business relies heavily on tech, for example you're a platform, a marketplace or a streaming service, it means that tech is the way you make money. It means that tech is your business.
[00:49:23]
It's not a service provider, it's the core of your business. And if that's the case, the way I presented is exactly the best way for you to grow it and sustain it. Period.
[00:49:43]
So solutions.
[00:49:48]
We are going to change the POM.
[00:49:53]
English speakers, you might not get it, but that's a fun, crappy joke. I'm sorry, I'm tired. Let's change the thinking system of the company. The idea here is to move from "I'm a backlog manager" to more of an "I run a learning pipeline" kind of logic, where everything flows toward that. So I'm giving you my tips. Use them or not.
[00:50:18]
First one. Yay! Deliver experiments, don't just implement things.
[00:50:27]
Just that first mindset switch is going to change a lot. Okay? That first one is easy, it's just mindset.
[00:50:39]
Second, and it gets trickier and trickier from here.
[00:50:45]
Ha.
[00:50:47]
Each feature you push out gets one hypothesis statement that clearly says: what is the problem you want to solve, why is it a problem, what data tells you that, how you formed that belief of yours, what target metrics you want to move and by how much, and a test protocol, okay? I want that. I don't want a user-story format or whatever, I want that.
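A minimal sketch of what such a hypothesis statement could look like as a structured record attached to each feature; the field names simply mirror the list above and are an assumption, not a prescribed format.

```python
from dataclasses import dataclass


@dataclass
class HypothesisStatement:
    """One per shipped feature, instead of a user-story ticket (illustrative template)."""
    problem: str            # what problem we want to solve
    why_it_matters: str     # why it is a problem for the business
    supporting_data: str    # the data that tells us so
    belief: str             # how we formed this belief
    target_metrics: list[str]  # the metrics we expect to move, and by how much
    test_protocol: str      # if <change> then <expected effect>, else pivot
```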
[00:51:17]
Because that's what truly matters.
[00:51:21]
Third: product teams. Please have product teams, and consider them as mini labs. Run by EPDA: make sure there are engineering, product, design and data analysts all in the same flow. Okay, there is no single chief: they share the burden, they share the responsibility, they share the ownership. They have full access to all the data, a clear vision of the problems they want to solve, and strong ownership to solve them. That will change a lot. Really.
[00:51:58]
And the last one.
[00:52:01]
Refuse to scale something if you don't have clear positive signals.
[00:52:08]
I want you to stop delivering stuff just because you have to deliver it, even though it has no effect on the business. Stop doing that. It's nonsense. We produce too much.
[00:52:22]
We need to focus on the right things to ship. Okay?
[00:52:28]
And the last bonus for you.
[00:52:33]
My favorite. Oh no, there was this one too, I forgot this one. This one is super fun: play the anti-roadmap game quarterly.
[00:52:43]
This is a quarterly review where you gather your teams, your PMs, your leadership and so on, and instead of the usual roadmap review where you keep adding stuff to the pile, you remember that one, you kill that ritual and replace it with this one, where the question is: what are we going to remove and kill?
[00:53:06]
That's super fun to do. And it's super healthy for the system, for the product you own. You'll see the conversations that emerge from it; it's going to change the game, really. So check the cost of things, how much revenue, how much usage and so on. If there is no value, it should not stay.
[00:53:28]
Bonus tip, this is my bonus, stop the nonsense.
[00:53:34]
I'm guilty of this one, because 10 years ago I wrote about the dual track of discovery and delivery, okay, because I was lazy and it was easier for me to say we are going to do it as a sequence. This is not good.
[00:53:52]
Discovery and delivery have to sit in the same team, with the same people. It's not easy, I agree with you, it's complicated; that's exactly why we need to do it, and do more of it. And by the way, stop treating engineers as service providers. Okay?
[00:54:11]
They are not there to just code things.
[00:54:15]
They are smart people. Highly educated people. They're very good at solving issues, at solving problems, at understanding where they come from. And the best teams I've met have engineers involved from day one in thinking about customer problems. Engineers are highly creative people.
[00:54:38]
If you keep them stuck just coding, I think you are wasting your money; it's not worth it.
[00:54:52]
So I'm going to leave you now. I think I'm more or less on time, late, on time, I don't know. Nobody cares anymore. Okay, cool.
[00:55:03]
I would like you to think about that.
[00:55:07]
What if, in your company, you stopped delivering stuff and instead started learning your business? Because that's what this is all about: you have to learn how you make money. There is nothing wrong with talking about money, about performance, about usage, about market penetration, about rates, about all of that.
[00:55:31]
We need to get better at business acumen, we need to be better at that. All of us, engineering included. Okay? PMs first, engineering included. And that's it. So.
[00:55:47]
Thank you to the From team for having me here today, it's really emotional. Thank you to all the great sponsors, because they are the ones who make this kind of time and space possible for us. And thank you to you for being here with me.
[00:56:08]
My name is Rachel. I'm a very colorful person, as you can see. If you want to be friends with me, you can scan that QR code. And if you want to give me feedback, please do; this was the very first draft of this talk, and I will be super happy. But don't do it through anonymous forms and so on, come and talk to me. Thank you.