Romain Quilici & Virginie Cador
Transcript (Translated)
[00:00:04]
Hello everyone. Thank you for being here for this first conference with Romain and me. We're going to talk to you about the performance framework of the product teams at Cdiscount. It's a real pleasure for us to share the feedback from our experience in this e-commerce market where, as we all know, adaptability and speed of execution are not just advantages, but above all conditions for survival. Our objective today is therefore to tell you our story of these last three years, but also to present the performance framework we have put in place, while keeping your attention level at its peak, because, as you all know, there's the irresistible call of the post-lunch nap. So it's a real challenge for us to keep you awake. For this, we're going to start by giving the floor to Romain for a presentation of Pixis, because I think you all know Cdiscount, but Pixis a little less, and he will then move on to a first part about local performance, and you'll see, it will be as stimulating as a double espresso.
[00:01:11]
Big challenge, we're going to try.
[00:01:17]
Okay, so I think you all know Cdiscount. Well, Pixis is the tech subsidiary of Cdiscount. So in summary, what we do is build and operate all the digital solutions of the Cdiscount ecosystem. Our playground is quite vast. It goes from B2C activities with the Cdiscount.com website, which I think everyone knows, with its 19 million unique visitors per month. But we also handle B2B activities that come from the expertise the group has acquired over decades. In there, we find Octopia, the marketplace-as-a-service that provides Cdiscount's marketplace but also has its own clients, such as Marjane, Vertbaudet, and several others. We also have logistics, which operates all the logistics services for Cdiscount but also has its own clients. And finally, our sovereign cybersecurity solution, Baline, which protects the Cdiscount site but also protects other services, for example government services, on a daily basis. So our playing field is vast, and across all these subsidiaries and digital solutions we work on crazy volumes, really. For example, on Black Friday, more than 1,200 orders per minute come into the system. On Octopia's side, we have more than 10,000 offer updates per second, that is, stock and price changes from sellers, and every day Baline filters more than 1 billion requests.
[00:02:57]
So at Pixis we have more than 600 people working. We have experts, we have people who are on the Google Advisory Board. And on top of that, we are always looking for talent. I know there's a lot of talent at FlowCon, so if you have any questions about our activities, don't hesitate for a second, we'll be happy to answer them. Quite a few people from Pixis came along.
[00:03:25]
So, I didn't introduce myself: I'm Romain Quilici and I'm the VP of Engineering for the entire B2B part that I presented to you before, meaning Octopia, the services at C Logistique, and Baline.
[00:03:36]
And I'm Virginie Cador, I'm Head of Transformation and Transversal Project at Belyx since October 2024, and previously Head of Delivery at Pixis for three and a half years. Before getting into the heart of the conference, just a quick note on what we will and will not cover today. So we will mainly talk about engineering value and the associated performance measurement. Engineering value, what do we mean by that? It's actually the entire product manufacturing chain across the different BUs of Cdiscount, through the framework, quality, and associated measurement. We won't talk about AI, even though we know it's a strategic topic for the entire industry, but we've chosen, as we're telling you the story of these last three years, not to mention it, especially since we're also at the beginning of the story on this subject. And finally, product value will also not be addressed in this conference, not because we're not interested in it, quite the contrary, because it's also an indicator that is followed, but we have so many things, you'll see, to tell you about the engineering part that we've chosen not to talk about it. Romain might come next year to talk about product value.
[00:04:44]
So, first point, to give you a little context on how we are organized in what we call our business units. Here, I'm only presenting the engineering part; of course there's the product side. But since we're going to talk a lot about engineering, I'm showing you this organization. So, we start with our teams. Our teams manage one or more products. Each product has its value proposition. These teams are managed by team leads, who have an extremely important role for us. The team leads are responsible for their teams' performance. These teams are grouped into what we call domains. A domain is broadly a set of products grouped by value stream. And at the domain level, we position a Head of Engineering, and like the team leads, the Head of Engineering is responsible for performance at their level: for their teams' performance, for the quality of what is delivered, for the speed of execution of their teams. All of this forms a business unit, as I presented earlier. And so we have 5 business units, which today represent more or less, depending on the season, let's say 60 teams. The first point, and this is the first part of the presentation, is local performance. So what do we call local performance? It's the performance we find at the team level. That is, really, how teams, by working on their daily practices, improve performance, and how we manage to measure this improvement, what indicators we put in place to see that it's progressing, that it's going in the right direction.
[00:06:26]
I'll tell you a little about the history of the last five years. So, at the end of 2020, it so happens that I wasn't there yet; I arrived a few months later. By the way, I'm celebrating the anniversary of my arrival at Cdiscount today. I arrived on April 1, 2021, there you go. Thank you. And overall, we asked ourselves a question: are we fast? At the time, we had agile teams, two-pizza teams. We delivered once a week, which frankly was already a feat, and we managed to synchronize the majority of teams for deliveries.
[00:06:51]
When we ask our people, ultimately the answer isn't so clear. More than 50% of our developers tell us it's painful to deliver: we have difficulties, we have many obstacles on the way to delivery in production.
[00:07:16]
The Cdiscount site has an availability rate of 96%. And finally, the business tells us that in this competitive retail market, broadly speaking, we still need to accelerate. So at that moment, we set a vision and we tell ourselves that in the coming years, we want to go faster and we want to make our system more stable. On this, we rely on the book Accelerate. Who in the room doesn't know Accelerate? Never heard of Dora metrics? Okay, well a few people, that allows me to continue right away, it's cool.
[00:07:50]
Uh, so in a few words, the book Accelerate describes a study that was carried out over 3 years on various organizations. And the conclusion of this study is that there is a very strong correlation between the performance of an organization and its performance on four key indicators that we call the Dora metrics. These metrics are the deployment frequency; the lead time for change, the time to bring committed code to production.
[00:08:24]
The change failure rate, that is, the failure rate of our deliveries, which is really a quality issue. And finally, the mean time to restore, which is: okay, if we have an incident, what is the average time to restore the system? So that's typically the kind of target we're aiming for, and in summary, to put it another way, if an organization performs well on these four metrics, it has a much higher chance of surviving in a competitive market. And there is one phrase that tells us we are going in the right direction; in the specific market of retail, it's 'Excel or die': be excellent, or you will die. So we say, okay, we're on the right track, that's exactly what we want to do. Uh, the problem with Accelerate, so I advise those who haven't read it to read the book, but for those who have, I think we all agree the book is great. The only problem with this book is that it gives the principles, but once we finish reading it, we have difficulty acting. We have difficulty transcribing what's written in the book into our context. And those are the difficulties we had.
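To make the four metrics concrete, here is a minimal sketch, in Python, of how they could be computed from deployment and incident records. The field names and values are hypothetical, for illustration only; this is not Pixis's actual data model or tooling.

```python
from datetime import datetime
from statistics import mean

# Hypothetical records; field names and values are illustrative only.
deployments = [
    {"commit_time": datetime(2023, 4, 1, 9), "deploy_time": datetime(2023, 4, 2, 15), "caused_failure": False},
    {"commit_time": datetime(2023, 4, 3, 10), "deploy_time": datetime(2023, 4, 3, 17), "caused_failure": True},
]
incidents = [{"start": datetime(2023, 4, 3, 17), "restored": datetime(2023, 4, 3, 21)}]

period_days = 90  # one quarter

# Deployment frequency: how often code reaches production.
deployment_frequency = len(deployments) / period_days  # deploys per day

# Lead time for change: committed code -> running in production (hours).
lead_time_h = mean((d["deploy_time"] - d["commit_time"]).total_seconds() / 3600 for d in deployments)

# Change failure rate: share of deployments that degrade the service.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# Mean time to restore: average time to bring the system back after an incident (hours).
mttr_h = mean((i["restored"] - i["start"]).total_seconds() / 3600 for i in incidents)

print(deployment_frequency, lead_time_h, change_failure_rate, mttr_h)
```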
[00:09:28]
At that moment, well, we launch a transformation program at Pixis called Go to Product. A transformation program that is supported, carried, and accompanied by a cross-functional team, which we called, or rather, which is called Learn to Scale. So a little nod to Mélanie and her teams who did a great job. And globally, the objectives of this transformation program are the following.
[00:09:54]
The first thing, as I told you, is that we don't have the recipe to put it in our context and to support the teams in validating the capabilities that allow us to improve on the metrics. So the first objective is the support of the teams. The second is scalability. I talked earlier about 60 teams across all BUs; when we add the platform teams, we are generally beyond 80 teams. So we have a scalability issue. This scalability issue can be resolved by the third point, which is that we want to give autonomy to the teams. In fact, we won't be able to deploy at scale if the teams don't have a certain degree of autonomy in acquiring the capabilities. And finally, the local strategy, that is, letting the teams take the lead in deciding which path they want to follow and by what means they want to improve. In the Accelerate book, there are 24 capabilities overall, and you can choose to go one way or another; the promise is that if you validate these capabilities, you will be better on the Dora metrics. And so we want to give the teams the possibility to choose their path. So, in terms of implementation, how does that translate? First thing, we put the KPIs in place, that is, the Dora metrics. We industrialize them and we provide the dashboards to all teams, so that they can monitor their starting point and the improvement of their performance as they validate capabilities. The second fundamental point is adaptation. As I told you, the Accelerate capabilities are not necessarily self-contained. So what we decide to do is take the capabilities; for those that are self-contained and very clear, everything is fine. For the others, we adapt them and break them down in our context, cutting them into much finer, much more actionable pieces for the teams, always seeking that level of team autonomy. Then, obviously, there's the piloting phase: when we deploy the capabilities, we pilot the deployment and see if it works. And finally, the impact measurement. I remind you of the promise of the Accelerate book: you have 24 capabilities; on top of those, we derived around 20 core capabilities and improvement capabilities. And the promise is: validate the capabilities and you will be better on the four KPIs; be better on the four KPIs and you will have a performing organization. That's the path.
[00:12:33]
So, I'm showing you an example of what it means to break down a capability. So there's going to be a small video, I'm not launching it right away because it goes super fast. So I'll have to speed up my speaking rate. But just to describe what we wanted to achieve. Typically in Accelerate, there's a capability called monitoring and observability. The way we broke it down, we broke it down into three capabilities that we call go-to-product capabilities.
[00:12:58]
The first, the first capability, is the monitoring of Accelerate metrics. The second capability, uh, is uh the technical monitoring. That is, in a team, again, these capabilities are at the team level, technical monitoring is how we measure the technical signals of a team's components. To be able to react if there's a problem. We're going to talk about memory consumption, we're going to talk about CPU consumption, we're going to talk about error logs, etc. Technical signals. And the last capability,
[00:13:30]
the one we broke out, is what we call functional monitoring: how we manage to supervise the system, but on the business axis. How we graph business functions in our system so that the teams, once again, are able to react much faster if an indicator degrades.
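To illustrate what such functional monitoring could look like, here is a minimal sketch assuming a hypothetical probe on order intake and a hypothetical alert hook; the threshold is a placeholder, not a real Cdiscount figure.

```python
import random
import time

ORDERS_PER_MINUTE_FLOOR = 200  # placeholder threshold, not a real Cdiscount figure

def get_orders_per_minute() -> float:
    # Stand-in for a read from the monitoring backend; replace with a real query.
    return random.gauss(400, 150)

def alert(message: str) -> None:
    # Stand-in for paging the team (chat, mail, on-call tool...).
    print(f"ALERT: {message}")

def watch_order_intake() -> None:
    # Supervise a critical business function on the business axis,
    # so the team can react before users or the business report the problem.
    while True:
        rate = get_orders_per_minute()
        if rate < ORDERS_PER_MINUTE_FLOOR:
            alert(f"Order intake dropped to {rate:.0f}/min (floor {ORDERS_PER_MINUTE_FLOOR})")
        time.sleep(60)
```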
[00:13:52]
On the previous point, what I told you is that we want autonomous teams. So we need to have actionable capabilities. Having actionable capabilities means having very precise descriptions. So I have the video, so I'm going to comment on it.
[00:14:05]
So on the left, you see all the capabilities that we have redefined at Pixis.
[00:14:12]
I'm going to take the part, so each capability has a title, so in this case, functional monitoring. We describe a bit, we give a bit of detail with the owner of the capability and who it concerns. We put an explanatory video so that the team can get on board autonomously. We set the objectives of the capability. We give some context, so in this case we're talking about functional monitoring, so we're going to describe what a critical business function is and how to monitor it. The expected gains following the validation of this capability.
[00:14:45]
The indicator that is impacted, in this case it's the MTTR. And a whole series of toolboxes, of training, so that the team can once again take this capability, validate it autonomously. Then, we will describe what the acquisition criteria are. When do we consider that the capability is validated in a team? So we give a set of criteria with musts, coulds, and shoulds. So all the musts must be validated. And finally, we give the validation methods, that is, with whom to work and with whom to pass this validation step of the capabilities.
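As a sketch of how such a capability sheet and its acquisition rule could be encoded, assuming a simple structure of our own; the fields and criteria below are illustrative, not the actual Pixis sheets.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    impacted_metric: str
    musts: list[str]                                  # all of these must be validated
    shoulds: list[str] = field(default_factory=list)  # strongly recommended
    coulds: list[str] = field(default_factory=list)   # optional

    def is_acquired(self, validated: set[str]) -> bool:
        # Rule described in the talk: the capability is acquired when every "must" is met.
        return all(criterion in validated for criterion in self.musts)

# Illustrative content only.
functional_monitoring = Capability(
    name="Functional monitoring",
    impacted_metric="MTTR",
    musts=[
        "Critical business functions are identified",
        "Each critical function has a graphed business indicator",
        "An alert fires when the indicator degrades",
    ],
    shoulds=["Alert thresholds are reviewed after each incident"],
)

team_evidence = {
    "Critical business functions are identified",
    "Each critical function has a graphed business indicator",
}
print(functional_monitoring.is_acquired(team_evidence))  # False: one "must" is missing
```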
[00:15:22]
The summary of all this is that we have Accelerate capabilities that are not necessarily actionable. We take most of these capabilities and break them down again in our own context to give them very precise descriptions, to make them actionable in the teams, and above all to give the teams the level of autonomy needed to achieve this scaling.
[00:15:50]
How does it work then? So here we launch the deployment. The deployment of the Accelerate capabilities, plus those we reworked, was launched in 2021. That's the bottom of the curve you have on the left. And what we can observe is a rather meteoric acceleration in the validation of capabilities between 2022 and 2023. So, what's happening? From the moment we provide this help, the redefined capabilities, to the teams, the teams get started autonomously. And something changes, which is that we move from a mode where we say we want to go there.
[00:16:35]
You remember, speed, stability, the vision we set. The hierarchy, well, the management pushes that, and something happens: we change direction completely. The teams see that tools are made available to them. By using these tools, they see that their performance improves. It's quite obvious: when, for example, we have functional monitoring on a critical function and we are able to react 10 times faster because the alert rings at the right time, because the signals are good, well, as a result, the team's performance improves daily. And as a result, it changes the paradigm: the teams actually drive the transformation. This is what we call moving from a push mode, where globally management says we want to go there and we encourage teams to go there, and that's the beginning, to a pull mode where the teams, totally autonomously, drive the transformation and see their performance improve. This is one of the fundamental points of this program's success. We were talking about it this morning in the keynote, but the fact that the teams are owners is fundamental in a transformation program. It's one of the successes. And that's what has made it so that today, we're not at the end yet, but we estimate that on average, teams have acquired 65% of the capabilities. How does all this translate?
[00:17:58]
So here, I'm just going to take a few seconds to describe it to you. So these are the dashboards we use to measure the four KPIs, the Dora metrics, at the smallest scale first, which is the local scale, so the team scale. After that, we'll see an aggregation, but the first point is at the scale of a team.
[00:18:18]
So very quickly, do you see my pointer? Yeah. Okay. So here we're talking about the deployment frequency. The blue curve here is the evolution of the metric over time. So this team, which is the pricing team, I don't know if the pricing manager is here, but in any case, it's a little nod because his team is performing super well. This team delivered 32 times in Q2 2023, so 32 times per quarter. Today, it delivers 86 times. Okay? So that's the evolution of the blue curve. The green curves, the horizontal ones, are the Accelerate thresholds: this is the elite threshold, this is the high threshold, the medium threshold, and the low threshold. And the vertical bars there are broadly the capabilities, the progression in acquired capabilities that impact this metric. Earlier, we saw that functional monitoring fundamentally impacts the ability to restore the system. So what we've done is broadly map capabilities to Dora metrics, and what we measure is whether, as teams validate the capabilities that impact a metric, the teams actually improve. That's the goal, right?
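As a sketch of that overlay logic, here is how one could compare a team's metric before and after each capability validation; the figures and capability names are illustrative, not the real pricing team's data.

```python
from datetime import date
from statistics import mean

# Quarterly deployment counts for one team (illustrative values only).
deploys_per_quarter = {
    date(2023, 4, 1): 32,
    date(2023, 7, 1): 41,
    date(2023, 10, 1): 58,
    date(2024, 1, 1): 70,
    date(2024, 4, 1): 86,
}

# Hypothetical mapping: capabilities that impact this metric and when the team validated them.
validations = {
    "Trunk-based delivery": date(2023, 6, 15),
    "Deployment automation": date(2023, 12, 1),
}

def before_after(validated_on: date) -> tuple[float, float]:
    """Average of the metric before vs after a capability validation."""
    before = [v for quarter, v in deploys_per_quarter.items() if quarter < validated_on]
    after = [v for quarter, v in deploys_per_quarter.items() if quarter >= validated_on]
    return mean(before), mean(after)

for capability, day in validations.items():
    b, a = before_after(day)
    print(f"{capability}: {b:.0f} -> {a:.0f} deploys per quarter")
```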
[00:19:33]
We have capabilities, we say, once you have the capabilities, your performance will improve and so we follow it. So this team is doing super well. Because we see that on all four, I'm not going to detail all four, but broadly over time, all performances globally improve. The MTTR, 33 days on average to restore production. Today, we are at 5 days.
[00:20:03]
So here we are at the team level, and this is really the dream team.
[00:20:07]
I cherry-picked a bit. Obviously, it's not the same everywhere; it would be lying to you to say that everything works well. And here, well, I don't call them catastrophic teams, and by the way, I haven't put the names of the teams, but it doesn't work at all. That is to say, here we have cases where the deployment frequency was good two years ago, and globally the curve is not at all going in the right direction, and that for the different indicators. Fortunately, it's not the same team behind these four indicators, but all this to say that capabilities are good, but they're not enough. There are also problems that can emerge in the daily life of teams, such as turnover, a team that is growing because we want to accelerate, or a team that is shrinking because budgets are shrinking, or departures, or team mergers. There can be quite a few external elements that also disrupt the performance of a team. I think I'm not teaching you anything on that point, but there you go. All this to say that we have a model that we believe works when there aren't many disturbances, and there can be disturbances that make this model jam a little.
[00:21:10]
I presented the team level; these are examples. At our level, we obviously have to look further, and what we look at is the consolidated view over all the teams. Have we, globally, at Pixis, across all BUs, improved? And similarly, I will now detail the four indicators. So the first is the deployment frequency, at a level wider than a single team: all the teams of the BU. I've put two graphs because, as you will have noted, on our Dora metrics we only have a 2-year history. So this first graph is globally the evolution of the number of releases and the headcount. What you see in orange is the headcount at Pixis, and what you see there is the evolution of the number of deliveries. What does this curve say? It says that between 2021 and 2022, we have a very strong acceleration in the number of deliveries, mainly due to three factors. The Go-to-Product program is launched, the teams acquire the capabilities and they improve: a first factor of performance improvement.
[00:22:16]
Well, we are more numerous, so necessarily when we are more numerous, logically we deliver more. And at that moment, there is also B2B that starts, Virginie will talk about it later.
[00:22:23]
If you leave me some time. Pardon. If you leave me some time.
[00:22:27]
If you leave me some time, sorry. And B2B, globally, well, we had strong demands from the client side, and so necessarily that led us to deliver more, to deliver faster. What you need to remember from this slide is that between here and there, we deliver 30% more. So we have a real improvement in performance on the number of releases. When I take the Accelerate indicator, which starts, let's say, in Q2, so this stabilization phase, we see that it is globally stable, but at very high levels. So here we are at the level of the entire BU. So clearly, it's a level of performance that we're satisfied with; we can always do better, but across all teams, we estimate that the level is good. If I now go down into the four indicators: deployment frequency, I've already talked about it.
[00:23:17]
When we look at the LTC, we see that the curve, the blue curve which is the indicator, is going down. Again, I'm talking about more than 60 aggregated teams, so to bring this curve down it really takes a real impulse, a lot of teams improving their performance. And we see that we are not yet in the good bands; we are between low and medium, so we still have room for improvement, but globally the trend is very good, so we consider that we are improving. On the CFR, we were already good and globally it remains stable. And on the MTTR, in the same way as the team that was doing well, we are on a curve of quite strong improvement; that is to say, our teams, globally, are able to restore production much faster than two years ago. So these elements actually prove to us that the acquisition of capabilities does allow the teams to progress.
[00:24:17]
This is one of our key learnings. That is to say, we launched our Go-to-Product program in 2020 with the adaptation of the Accelerate capabilities and the support of teams in acquiring them. The result four years later: if you remember the first slide, we delivered once a week; today we deliver to production every 7 minutes, and globally the teams are completely autonomous to deliver.
[00:24:47]
We no longer have freeze periods. Before, close to commercial events such as sales, Black Friday, etc., we froze deliveries, so there were periods of 2 months with no more deliveries to production. That's over: now we deliver the day before Black Friday and we do it without fear. We still avoid delivering on Friday, because we like to have good weekends, but that's a rule that's a bit unofficial, let's say. And finally, we reach a level of availability on all digital solutions of more than 99%. We go from 96% to 99%, so in fact, it's not just the KPIs that improve. There is also the business impact: on Cdiscount, a minute of unavailability is several tens of thousands of euros lost. So when we gain three points of additional availability, we also serve the business. It's not just KPIs that please us in tech; we serve the business, and our system is more stable.
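As a rough order-of-magnitude sketch of that business impact (the cost per minute below is a placeholder, since the real figure is confidential):

```python
minutes_per_month = 30 * 24 * 60            # 43,200 minutes in a month
cost_per_minute_eur = 20_000                # placeholder for "several tens of thousands of euros"

downtime_at_96 = minutes_per_month * (1 - 0.96)   # ~1,728 minutes of downtime per month
downtime_at_99 = minutes_per_month * (1 - 0.99)   # ~432 minutes of downtime per month

recovered_minutes = downtime_at_96 - downtime_at_99               # ~1,296 minutes gained per month
print(recovered_minutes, recovered_minutes * cost_per_minute_eur)  # order of magnitude only
```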
[00:25:45]
So on our key learnings, so what worked well, Go-to-Product. Obviously, the steering and the measurement and all the data that goes with it, what I showed you. I'm sorry, it's quite dense and then Virginie is rushing me a bit, so I'm speeding up. But, in fact, this data is fundamental. And I've already told you, so the pull versus the push, the teams take ownership of the transformation. It's a game changer in the way the transformation has operated and has accelerated.
[00:26:15]
Quickly, the hard points, because again, you've seen there are teams where it works less well. The duration: we were aiming for 2 years, we had said 2022-2023, and we see that we are only at 65% of the capabilities, so we still have work to do and it's getting longer. Measurement biases: I also talked about this with a colleague from Le Bon Coin last night; it's a bit of a source of friction with the teams. Because obviously, when I showed you earlier graphs that are degrading, what we do is ask the team lead, it's their role, to understand why the team's performance is degrading, what's happening? Well, when the measurement is not right, it obviously creates a bit of friction.
[00:26:54]
The slowdown in the capability validation curve: globally, we had a big acceleration in 2022-2023, and the curve levels off in 2024.
[00:27:06]
The choice of capabilities and their impacts. That is to say, we broke down capabilities, and yet the impact in the teams is not as great as what we had imagined. Uh, we also let the teams choose their own path. Well, sometimes that frustrates us a bit, because there are capabilities that, for me, are the basis; functional monitoring is the basis. That is to say, I believe that my teams must know when there is a problem in production and be able to restore it quickly. Well, as we left the choice to the teams, there are teams that said, well, functional monitoring, we'll do it later. So I have to live with it, but in terms of impact it leaves some small frustrations. Giving meaning, constantly: on a program that lasts so long, you have to give meaning to the teams and remind them what it brings them. And the disturbances, I've already talked about them.
[00:27:56]
You have 30 seconds left.
[00:27:57]
30 seconds. Okay.
[00:27:59]
The Lean shift, globally, 2024, we're making the Lean shift. Why the Lean shift?
[00:28:06]
I showed you that in fact we go fast, we are more stable.
[00:28:11]
Uh, our KPIs are progressing, we impact the business, but we are missing a path to follow. When we ask the teams, there are still frictions at the local, team level that the capabilities we have put in place cannot necessarily solve; they remain tools made available. So we believe that the teams must work on their local problem resolution, be more customer-oriented, and enter a system of permanent learning so that our teams get better. And globally, that's the promise of Lean. So Lean was started at the end of 2024, so we have very little hindsight, but it's starting to deploy and generalize in our teams, with really great promises. In summary, in 10 seconds.
[00:28:55]
Local performance, with us at Pixis, is measured through the Accelerate KPIs. It has been supported for 3 years by all the capabilities to make the teams progress. And today, we are tackling the Lean shift; it's another angle of view, a new catalyst to improve these performances.
[00:29:13]
And I'm handing over to Virginie.
[00:29:14]
He took all my time. I'm sorry.
[00:29:16]
No, no, it's fine. Go ahead.
[00:29:17]
Okay, so now I'm going to talk to you about global performance. Romain mentioned Octopia previously, the BU that mainly carries B2B within the Cdiscount group. To achieve the objectives associated with B2B, namely quickly delivering value while respecting contractual commitments, we were forced to implement a global performance framework. And that's what I'm going to present to you right after, around certain indicators and the evolution of these indicators. I won't dwell on it here, but we started this global performance framework with B2B because of the challenges we had on this subject, and it is being deployed, or even already deployed, across all the BUs of the Cdiscount group, which represents a rather enormous volume. Before going into the indicators, just a little context, which I think is extremely important. For more than 20 years, Cdiscount was centered on its B2C activities, developing a powerful and robust tech platform to face the various challenges, particularly of volume, that Romain mentioned previously. Building on this experience, Cdiscount decided to launch its B2B platform, Octopia, so Marketplace as a Service. The notion of B2B brings about a paradigm shift. Indeed, it's not enough to say that we're moving from B2C to B2B and that everything will go well. No, it also requires an evolution of the organizational mindset, particularly around customer commitments.
[00:30:57]
Just to quickly illustrate what I just mentioned about customer commitments. When I arrived at Cdiscount in mid-2021, Christophe Sanson, the CEO of Pixis, told me, 'Listen, Virginie, your mission is simple: you must secure all the commitments made by the Pixis teams, both internally and externally.' So, based on this mission that was entrusted to me, I went around the teams. I have to be careful what I say, as some are in the room. The answer was almost unanimous, and also revealing of what I'm going to tell you, namely: 'No, Virginie, we don't have formalized commitments, and above all, being late today has very little impact.' So that's where we started in terms of mindset, and that was one of the first challenges for us: to make people understand this notion of customer commitment compared to what we knew before.
[00:31:46]
Based on these commitments, the second challenge was the business, because when we launch a product like Octopia, we necessarily go looking for business. And so we signed clients. One of Octopia's first clients is Marjane. Marjane is a large Moroccan retailer that already had its platform and decided to launch a marketplace, the first in Morocco. Marjane had enormous challenges on their side; clearly they were under close watch, because others had tried and failed. So Marjane launched with Octopia, a great pride for us at the time. Except that it forced us to pivot very quickly towards much tighter steering of the landing of certain features and themes of our product, compared to what we could do before on our roadmap. So that's the global context. Second element of context: the framework. From 2020 to mid-2022, at the launch of Octopia, the entire Octopia program was organized in SAFe. At the beginning of 2022, we made the choice to switch to an in-house working framework within Cdiscount. This in-house framework covers the entire product flow, fairly classically, with four main phases: the strategy phase, the discovery phase, the delivery phase, and the learning phase. Associated with this framework, we also address the notion of standardization, so standardization of the different workflows at the level of all the objects that make up this framework. You will see later, we will talk about themes and epics: themes being medium-term business ambitions with an associated investment, which are themselves composed of epics, and epics being value increments that have a user impact on our product. So we standardize the whole, we standardize everything that happens at the team level, and we also set up adapted governance. What's important to understand is that this framework corresponded to our context. That doesn't mean it will work elsewhere, etc., but we adapted it to the context and to what we wanted to do. So, adapted governance.
[00:33:53]
So that's for the global B2B context. We set the framework, and then, I'm going to rely on a phrase from Christophe, who always says, 'Without measurement, it's all just opinion.' I love this phrase. Because effectively, when we don't have measurements that are associated with the framework we've established.
[00:34:08]
Uh, we are left with subjective elements, opinions that depend on individuals and on each person's interpretation. So we put the measurement in place, on global performance. And in fact, with the implementation of these measures, we were able to start defining targeted actions with concrete impacts, and these measures allowed the teams to work on their workflow, but also allowed us, within the organization, to keep improving our framework over time.
[00:34:38]
In the rest of the conference, I'm going to present to you the different indicators that we have put in place. So the notion of predictability, we'll come back to it later, huh. The notion of re-prioritization, the re-prioritization rate.
[00:34:51]
The management of our flow, customer commitments, and time to market. So before looking at all the indicators, just an element of context on the means. The means that were deployed: Mélanie's Learn to Scale teams contributed a lot to the capability part, and the definition of the framework was done by Samuel, I don't know if he's here,
[00:35:13]
in co-construction, and I took care of the implementation of all the indicators, the deployment in the teams, giving meaning, etc. And we relied on Jira Software; we didn't have the means to buy another product on top to aggregate all the indicators, so we hooked it up to eazyBI and made do with it, and it worked pretty well, because we knew where we wanted to go.
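To give an idea of what pulling the raw data can look like, here is a minimal sketch against Jira's standard search API with the requests library; the instance URL, credentials, project key, and JQL are placeholders, and this is not the actual Pixis/eazyBI setup.

```python
import requests

JIRA_URL = "https://your-instance.atlassian.net"  # placeholder
AUTH = ("user@example.com", "api_token")          # placeholder credentials

def fetch_issues(jql: str, fields: list[str]) -> list[dict]:
    """Pull issues from Jira's search endpoint, page by page."""
    issues, start = [], 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": jql, "fields": ",".join(fields), "startAt": start, "maxResults": 100},
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        if not page["issues"]:
            break
        issues.extend(page["issues"])
        start += len(page["issues"])
        if start >= page["total"]:
            break
    return issues

# Example: epics resolved this quarter for one project (JQL and project key are illustrative).
epics = fetch_issues(
    "issuetype = Epic AND project = OCTO AND resolved >= startOfQuarter()",
    ["summary", "created", "resolutiondate"],
)
print(len(epics))
```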
[00:35:37]
So the first one: measuring the reliability of the roadmap. What do we look at? We look at a period that corresponds to the NOW of a roadmap, as in Now / Next / Later, so the next three months, the quarter. What do we intend to do at the level of a BU, and what do we actually release? And we exclude from this calculation the themes that are deprioritized during the quarter.
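As a minimal sketch of that calculation, with hypothetical theme records (the names are invented):

```python
# Hypothetical theme records for one quarter (the "Now" horizon of the roadmap).
themes = [
    {"name": "Seller onboarding v2", "released": True,  "deprioritized": False},
    {"name": "Unified invoicing",    "released": False, "deprioritized": False},
    {"name": "New search facets",    "released": False, "deprioritized": True},
]

def roadmap_reliability(themes: list[dict]) -> float:
    """Share of committed themes actually released, excluding themes deprioritized in-quarter."""
    kept = [t for t in themes if not t["deprioritized"]]
    if not kept:
        return 0.0
    return sum(t["released"] for t in kept) / len(kept)

print(f"{roadmap_reliability(themes):.0%}")  # 50% in this toy example
```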
[00:36:00]
First measurement of the indicator: we started deploying this framework in Q3 2022. We commit to 50 themes; remember, a theme is a medium-term business ambition. Out of 50, we release 40%. When we release this first indicator, I can tell you it makes everyone grind their teeth, across the whole organization. But we realize where we're starting from and the work and the journey we still have ahead of us. So it allows us to realize the state we are in with respect to this indicator, and above all, to validate that the framework contributes to improving it. 2023, well, we work with the teams.
[00:36:43]
We first work on the incoming flow.
[00:36:46]
The pointer. Here, the pointer.
[00:36:50]
So we work on the incoming flow. We go from 50 themes to 36. And we didn't just reduce: we worked on the quality of what we committed to, both in terms of value and of increment breakdown, etc.
[00:37:05]
And, by looking at the indicator, we go from 40% to 54%. Finally, some progress. We ask ourselves what our target is. Okay, we set a target of 80%, which seemed reasonable to us compared to what we had observed on the ground and what we wanted to achieve. And from Q2 2023, well, we reach 80%. You'll see that it's linked to client commitments, I'll come back to it later, but we maintain this target and we achieve it over all of 2023. So, starting to observe (I'm not going to teach you anything here): we set the indicator, we observe, we know where we want to go, and then we act on it; not alone, the teams also act on it. Second indicator: the measurement of the global predictability of the teams. So, the number of points across the program that are planned in each team sprint versus the points delivered at the end of each sprint. Again, it's a global measurement, that's important.
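As a sketch of that second indicator, aggregated over hypothetical sprint records (team names and points are invented):

```python
# Hypothetical sprint records aggregated over all teams of the program.
sprints = [
    {"team": "Pricing", "planned_points": 40, "delivered_points": 36},
    {"team": "Catalog", "planned_points": 55, "delivered_points": 50},
    {"team": "Sellers", "planned_points": 30, "delivered_points": 29},
]

def global_predictability(sprints: list[dict]) -> float:
    """Points delivered at the end of the sprints versus points planned, program-wide."""
    planned = sum(s["planned_points"] for s in sprints)
    delivered = sum(s["delivered_points"] for s in sprints)
    return delivered / planned if planned else 0.0

print(f"{global_predictability(sprints):.0%}")  # 92% in this toy example
```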
[00:38:02]
Similarly, Q4 2022, we have an 80% predictability rate, which is pretty good, we're not going to hide it. And that's notably due to the implementation of the Accelerate capabilities, which help the teams better control their flow and their sprints. And a number of points delivered of 2,700; this doesn't matter in itself, it just served as a benchmark for us. Based on this observation, we tell ourselves, okay, let's be a bit ambitious, we're going to push things a little and set a target of 90%, given the challenges we had. So here you see the evolution over 2023: we went from 80%, to 84% in Q1, until reaching the target of 90%. Now you'll tell me, well, in fact, your teams have just under-committed and then boom, it's settled.
[00:38:48]
Well, we may have had a bit of this phenomenon, but we worked on it again, we tried to give meaning to all this, and we continued to observe the number of points delivered with constant team size. And we went from 2,700 to almost 3,000 in Q1, and then up to 3,154. I'm not saying that everything is perfect, of course there are biases, Romain talked about it. But nevertheless, again, by setting the indicator and working on the target, we made everyone aware that we needed to perform a little better on this subject as well.
[00:39:21]
The re-prioritization rate. So, uh, we started looking at it at the beginning of 2024, because before we were mainly focused on the other two indicators, among other things. And well, one of the pitfalls raised about the program is, 'Oh, but we don't know what we're going to do, it keeps coming in and out,' etc. So, well, let's go, we measure it.
[00:39:40]
So Q1 2023, uh, 8% deprioritization, Q2 6%, which is pretty good. This is explained by the fact that, over these two quarters, we were largely driven by customer commitments, that is, topics that absolutely had to land in those quarters, hence the low re-prioritization rate. Q3 2023, we move a bit out of this mode and re-appropriate our roadmap, and there, as a result, we have to ask ourselves the right questions: what impacts are we going to look for at the level of the Octopia product, etc. And we see that it makes us wobble a bit, since we go to 28%. So we're finding our feet a bit, and then, little by little, we rework the indicator and bring it back down. We didn't set a target, because it seemed extremely complicated to us to set a target for this rate. However, we were in constant observation to try to understand it.
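The calculation behind this rate is simple; here is a tiny sketch with toy figures in the spirit of the talk, not the exact Octopia numbers:

```python
def reprioritization_rate(themes_committed: int, themes_deprioritized: int) -> float:
    """Share of the quarter's committed themes that were deprioritized along the way."""
    return themes_deprioritized / themes_committed if themes_committed else 0.0

print(f"{reprioritization_rate(36, 3):.0%}")   # ~8% with these toy figures
print(f"{reprioritization_rate(36, 10):.0%}")  # ~28% with these toy figures
```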
[00:40:29]
Customer commitments.
So it's very simple: the difference between the roadmap and a client commitment is that the roadmap lands within the quarter, whereas a client commitment is a date, and we don't have the right to exceed that date. The objective was 100% of dates met on the contractual commitments we had.
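A minimal sketch of how the on-time rate on commitments could be computed, with hypothetical commitments (theme names and dates are invented):

```python
from datetime import date

# Hypothetical commitments: each has a contractual date that must not be exceeded.
commitments = [
    {"theme": "KYC for sellers",   "due": date(2023, 6, 30), "delivered": date(2023, 6, 12)},
    {"theme": "Payout scheduling", "due": date(2023, 6, 30), "delivered": date(2023, 7, 8)},
]

on_time_rate = sum(c["delivered"] <= c["due"] for c in commitments) / len(commitments)
print(f"{on_time_rate:.0%}")  # 50% in this toy example
```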
[00:40:52]
So we've been pretty good there. In Q1 2023, we had 12 themes to deliver, and we delivered 100%. Q2 2023, 16 themes, which took up almost half of our roadmap, and we did 80%. So we weren't at the target. However, with everything we had put in place, and this might make some people react, it allowed us, so to speak, to anticipate our delays, to announce them to the clients, and therefore to really be in a co-construction relationship with them.
[00:41:19]
versus, hey, we're announcing at the last minute that the feature you've been waiting for for months won't be delivered. So it had this virtue.
[00:41:27]
Q3 2023, we were a bit more in trouble, it was the summer period, well, that's it, we had a few delays and then Q4, we went back to 100%. So what's interesting is to correlate all these indicators. If we look again at the period from the launch of Q4 2022 to Q4 2023, in fact, we see a constant increase in all indicators.
[00:41:50]
So you have in green the predictability of the teams, the number of points in yellow, the reliability of the roadmap in blue, and then the deprioritization rate in red. So beyond the constant improvement of the indicators, we also see that when we deprioritize, it has a direct impact on the reliability of the roadmap. However, with the framework put in place and the fact that the teams generally know where they're going, the impact remains low. So in one year, we gained 42% in roadmap reliability. We could say we started a bit far back at the time, but nonetheless it's a good result; I remind you that we are at scale, we are not talking about two teams. Plus 17% more points delivered with constant team size, and team predictability also up more than 10%. If we now look at the period from end 2022 to 2024: I've just detailed from here to there, and then 2024 arrives. So Q1 2024, we stay at a cruising rhythm, so to speak.
[00:42:52]
Everyone understands where we want to go, and we even continue to increase performance. And then, as you can see, Q2 2024, a little glitch, so to speak, but in fact it's explained quite simply. Early 2024, change of shareholder at the Cdiscount group.
[00:43:09]
That happens, it's part of corporate life. A change of shareholder also means a change of strategy for the different products. And so, direct impact, boom, on the reprioritization of Q2 2024. We had started with a roadmap; the strategic choice was made to reorient the Octopia roadmap towards the Cdiscount BU, with a direct impact. So 42% deprioritization, 50% of the roadmap kept. However, we see that the teams continued to deliver.
[00:43:39]
And in this case, they kept delivering, but not necessarily features that landed as value by the end of the quarter. And because the framework did not change, we see that within two quarters we got back to the expected performance levels. And clearly, if we hadn't had this framework, I am convinced that from there we would have continued to be in trouble.
[00:44:04]
So far we have talked about performance indicators; since the beginning we've been saying we want to accelerate, and we haven't talked about that yet. So we're going to finish with the time to market part. What is time to market at Cdiscount? It's the number of working days between the beginning of the discovery phase and the end of the delivery phase. Before looking at the time to market figures, I'm just going to talk for one minute about flow, because it's an important element and it has a direct impact on the time to market. So here, you find the two notions of objects that we manipulate
[00:44:40]
within the framework, namely the themes and the epics. Before looking at time to market, it took us about a year to observe our flow, the flow of these different objects. So here you have the beginning of 2021, and here we go until mid-2024. What do we see, both on the epics and on the themes? That we tended strongly to deliver our value at the end of the quarter, okay? It's a fact, and the graphs show it. And little by little, we worked with the teams on this notion of flow, of WIP, Work In Progress; I think it has already been mentioned in other talks. And we said to ourselves, okay, we want to deliver value continuously. We don't want to deliver value at the end of a quarter; we want to be able to deliver value every week or every two weeks. And that's what you can see on the graph. By observing these indicators, we actually managed to shift from a somewhat batch mode to a flow mode. You can clearly see it here. This is 2024.
[00:45:43]
Both on the themes and on the epics.
[00:45:47]
So the impact on the TTM, well, we see it too, because we have an average TTM on the themes: we went from 190 working days at the beginning of 2024 to 139 working days at the end of 2024. So that's about 4 and a half months. So we already have a strong decrease in this TTM on, once again, value increments with a fairly strong business ambition. Knowing that the TTM of the epics, and I turn to Romain here, is about 40 days. Thank you.
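Given the definition above (working days between the start of discovery and the end of delivery), here is a minimal sketch of that calculation; the dates are invented, and numpy's business-day count only excludes weekends unless you pass it a holiday calendar.

```python
import numpy as np

def time_to_market(discovery_start: str, delivery_end: str) -> int:
    """Working days between the start of discovery and the end of delivery (weekends excluded)."""
    return int(np.busday_count(discovery_start, delivery_end))

# Illustrative theme: discovery started on January 8, delivered on July 26.
print(time_to_market("2024-01-08", "2024-07-26"))  # 144 working days in this toy example
```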
[00:46:19]
So there you go, we also managed to accelerate, and that was one of the strong objectives. The objective for the end of 2025 is to go even further and gain another 30% on the TTM. So, the key learnings.
[00:46:35]
Uh, constant observation, that's what I was saying, finally, if we don't look, in fact, we don't see, simply. It's a bit like what we said this morning about Kaizen.
[00:46:48]
Uh, so, constant observation: we weren't looking every three months. At the beginning, we looked every week with the teams; the analyses, well, we did a lot of work. Afterwards, we broadened out a bit, but it continues. Standardization: we can't do this if we don't standardize, if what we expect from each other, the practices, so to speak, are not standardized. So we had a lot of work on that. The client-oriented mindset: we started with that because there was a strong stake there, and it might seem simple, but I guarantee it was complicated to switch. The One Team: here we're talking about engineering, but there are product people in the room, and in fact, without engineering and product together, and all the BUs, we can't do it; it's extremely important. Sponsorship: there was sponsorship at all levels of management, at all levels of the company. And that's also key, otherwise we won't get there; very clearly, we won't get there. We presented our reports every quarter, we didn't lie to ourselves, we looked at what worked and what didn't, and we put continuous improvement loops in place. And that's what made us progress so quickly on the indicators. And well, the adapted governance and the improvement loops, I just talked about them.
[00:48:06]
The summary, sorry. So, in summary, on global performance: the framework, standardization, the KPIs I presented to you, and above all an organization that learns. The organization learns from what happens locally, but also at the global level, in the overall mindset.
[00:48:21]
If you don't have an organization that questions itself, that is capable of continuously improving, you can't do what we did. Finally, it's not possible.
[00:48:31]
Thank you for catching the time back up, Virginie.
[00:48:34]
Uh, in summary, we know we've given you a lot of information, but in fact we've told you 3 years of our life, of team progression.
[00:48:42]
Uh, so in summary, we had a period where we wanted to go faster and stabilize the system. We are entering the period where we are doing our best with Lean.
[00:48:55]
And all that Virginie has just presented, the flow monitoring indicators, helps us master all of this. How do we measure performance at the local level? We use the Accelerate KPIs. At the global level of the organization, we use the six indicators that Virginie has just presented.
[00:49:17]
We'll stop there. We really thank our teams, because it's their work that we came to present, and if they hadn't performed so well, well, the slides would have been harder to defend, let's say, and maybe we wouldn't even have been accepted at FlowCon. So we thank our teams and, well, we have, I don't know, 5 to 7 minutes left for questions. Thank you.
[00:49:40]
Before the questions, I'd just like to allow myself a thank-you to our two companies that allowed us to be on stage today, namely Pixis. So there you go, if there are any representatives, thank you, thank you, it was great.
[00:49:53]
Go for the questions.
[00:49:56]
Thank you for the presentation. So I understood that you accelerated the flow. I would be quite curious to know if you were able to make a correlation with the business results on one side, and then the satisfaction of the teams on the other side.
[00:50:12]
Uh, already, all the stability part necessarily has a business impact. As I told you, on an event like Black Friday, whether we are at 96% availability or have no incident at all directly changes the bottom line, the number of euros the company brings in. So there is a real business impact, that's the first point.
[00:50:30]
Uh, the mastery that Virginie presented also allowed us to have business satisfaction, especially in the B2B framework, where clients were expecting us. That is, we really had a difficult phase where we were building the product, but at the same time, we were expected by clients, and the fact of being in control of everything we were delivering, of keeping our promises, had a real impact on client satisfaction and therefore on the whole business which also had confidence in us. That is, globally, we were keeping our commitments.
[00:51:01]
Of how much?
[00:51:03]
Of how much?
[00:51:04]
Uh, to correlate that with exact figures, the final amount, I wouldn't be able to answer like that. We know it for availability, because we know how to correlate one minute of unavailability with how much revenue loss it generates in the end. So there we can correlate very well. I can't say much more; these are somewhat confidential figures. But once again, we're talking about large amounts. However, the correlation between going faster and a business gain, I can't say.
[00:51:32]
Oh, geez.
[00:51:33]
Yes, I have the microphone.
[00:51:36]
Uh, thank you very much, very impressive because making a structure like that move in 3 years, bravo! And I, my question is...
[00:51:41]
In two, originally.
[00:51:44]
Uh, in fact, what kind of support did you benefit from? That is to say, at some point, in your organization, you said, okay: Accelerate awareness, DORA metrics, implementation of capabilities. But what I see is that in my organization, first, there's raising awareness among people about the DORA metrics.
[00:52:02]
What the capabilities are, explaining them, writing down everything you've shown, it's magnificent, but you have to write it all down, etc. Who did it? Internal teams? Who?
[00:52:11]
Uh, the first point, and I was discussing it with my ex-colleagues from Pôle Emploi: sponsorship. It has to be a strategic decision, supported by strong sponsorship. For us, it came from our CIO, Christophe, who said: that's where we want to go. So that's the first point. And the second point: there is a cross-functional team called the Learn to Scale team; we're talking about ten people, so it's not enormous. These ten people wrote the sheets and accompanied the teams, but we also mainly relied on the organization. That is, we had appointed owners in the organization; we had heads of engineering who were capability owners and who came to support the teams. So in summary, we had the cross-functional team that supported and accompanied, with coaches in there too, but we're talking about ten people
[00:53:02]
who wrote the sheets, etc., and alongside that, it's the organization that also supported it, with key people who were capability owners.
[00:53:09]
And if I may add, on global performance, we made the choice, notably with Romain, to bet that the organization would take over everything we did. It demanded a lot of energy from us to be there daily, but we didn't add any extra cross-functional positions; it was really a deliberate choice that the organization, once it understood, would take it on and make it work. So, we weren't always in agreement; there was a lot of discussion. But it was really a choice, and I think it was the right one.
[00:53:46]
The four or five DORA metrics are a bit of the lagging type: they take time to evolve as we press on the twenty, 24, 28, I never know, capabilities. Did you, to try to manage that a bit more finely, use measures that maybe update faster, that are observable sooner, or not?
[00:54:07]
Uh, no. No, we really measured that, and we gave ourselves time. Once again, we had the ambition of 2 years, but that's also why, 4 years later, we are still at it: we kept the capability breakdown and we forced ourselves to keep these metrics. And moreover, you can see that the DORA metrics have evolved, the thresholds have evolved; we stayed on the old version, in fact, because it's our working framework, it's how the teams measured themselves, and those were the benchmark values, so we stayed on that. But indeed, between the time to validate a capability, which can already take months, and the time for that capability to have an impact on performance, we can be talking about several months, indeed. That's why we also waited a long time before doing this type of presentation. Today we have 2 years of hindsight, and that's what allows us to speak. Indeed, in 6 months, it's difficult to see any impact.
[00:55:00]
That said, the teams look at the indicators at the end of each sprint; they observe things beyond the capabilities anyway.
[00:55:09]
Thank you Romain and Virginie for this session. We unfortunately won't have time for other questions.
[00:55:14]
Will you stay with us today and maybe tomorrow too? Yes. Well, maybe you can then come and meet Romain and Virginie if you have more questions. Thank you very much for coming, thank you for your session.
[00:55:25]
Thank you all.