Yannick Quenec'hdu

Transcript (Translated)

Let me introduce myself: my name is Yannick Quenec'hdu and, as my name suggests, I am Breton.
A little history: in 2011, I started a project at DCNS, a nuclear submarine project. Initially, my associate was called in to see whether we had a solution for a project that wasn't working, which was using the V-model method, and we were asked to come. I didn't have much experience with Kanban; I had read a bit, and I thought Scrum wouldn't be suitable given the context. So we ran our first experiment with Kanban. Following that, I gave a presentation at Scrum Day 2011. I was following what was happening on Twitter at the time, and I received a lot of comments, because we had highlighted what we call predictability. Many people said it was a bit idealistic, even a sorcerer's-apprentice technique. So since 2011, I have been trying to build an offering around a support wave that could implement predictability and show that it works. For the past two years, we have been working essentially on this. After DCNS, I went to Warsaw to work for LVMH to implement it, and I joined the company Xebia a year ago to help them build an offering around Kanban. And this is more or less what I will present to you here: how we built what we call a wave, that is, the ability to support a team so that it becomes self-organized around Kanban methods, and, to satisfy the business and management, how we worked on indicators and predictability. This is the experience feedback on this wave that I will present to you.
So the initial context is an agile transformation with a Kanban orientation, built around an 8-week support wave; we can now shorten it to about 6 to 8 weeks to take a small team, often about ten people, and transition them to Kanban. To give you an idea of what we are doing in this context: we set up a pilot in 2013 with a very large client, the world's number one in insurance, and we started on two projects, about thirty people. It worked very well, the pilot generated a positive buzz internally, and at that point we moved to an initialization phase, on which we are currently working with five coaches, transforming 10 projects at the same time, about a hundred people. Our next focus is to transition 80 other projects to this method, about 400 to 500 people. On top of this buzz, and even beyond this client, several other companies, including well-known French names from the Internet, are very interested, and we are starting to do pilots with them.
Within the framework of this project, we set a limit to start with, because we couldn't take on everything — knowing that our target scope by the end of 2014 is to do what we call DevOps, that is, to cover the entire process, from the idea to the build, using the Kanban method.
We limited our approach to a three-axis strategy. The first axis, and the reason we were called by this client, is what we call business satisfaction. The business was somewhat dissatisfied with both the V-model methods and the Scrum methods that had been implemented, because they had trouble seeing where things were going; they had trouble having a vision of the product and of whether deadlines would be met. So we worked on this issue. The second was to improve quality, since there was a very high number of anomalies and significant technical debt, which obviously impacts the last axis: keeping promises. Keeping promises is ultimately what our client initially wanted: how we can show that we can roughly meet our deadlines. Well, as you will see, it's not magic, but we managed to respond, at least to their needs, thanks to predictability.
So how did we build this? We quickly understood — as does everyone who does a bit of agile — that the very core of the practice is the user story: if a user story is of quality, we will often have a quality product. The user story goes from the idea to almost the end of the project; it follows the entire lifecycle. So we said: we will improve it. Then, to achieve predictability, we need to control the size of this user story, to avoid having user stories that take 10 days of work and others 1 day — we try to maintain a roughly constant granularity. How did we implement this? Initially, at the idea level, we worked with the business or the sponsors using three different approaches. One is Lean Startup, which I think you all know, since there was a presentation yesterday by two of my associates. Another is Frugal Innovation, which may be new to some: Frugal Innovation appeared in India and Brazil and resulted in France, for example, in a car, the Dacia. Simply put, it's about how to do better with less money. And the third is a classic technique you also know from Scrum methods, what we call Sprint Zero, that is, product vision. Depending on the type of project we support, we apply one of these three approaches. This first step allows us to identify the basic functionalities, and we do what we call a story mapping. From this story mapping we build what we call a customer experience mapping, that is, a user journey. This journey then allows us to create the MMF, the Minimum Marketable Feature: what the client needs in the first phase to build their product.
And this already gives us our first indicator: the number of user stories per MMF. Our goal is to control it too. Currently, for example, we have MMFs that have 2 user stories and others that have 34, so we try to control it, because for us the MMF is ultimately the demo. Unlike classic Scrum methods where we stop at the end of an iteration, in our framework we are floating: we do floating demos, and the business — especially with this type of large client — has little availability, so we need to plan, and therefore be predictable about the date. Using another indicator that we will see later, called cycle time, we manage to give dates that are predictive to within 2 or 3%. From this point, we start working with the Product Owner team: we take a first MMF and build with them what we call their product backlog. For their product backlog, initially, I put three sentences because we implemented a technique that is quite quick to put in place: the coaches arrive and impose a practice. Why do we impose this practice? Because we want the user stories to be built in a particular way, to improve granularity and reduce anomalies.
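To make the demo-date idea concrete, here is a minimal sketch of how an MMF's demo date might be projected from its user-story count, the average cycle time, and the WIP limit. This is an illustration of the principle, not the team's actual calculation; the function name and numbers are made up, and a real version would rather use observed throughput (Little's law).

```python
from datetime import date, timedelta

def predicted_demo_date(start, stories_in_mmf, avg_cycle_time_days, wip_limit):
    """Project the demo date of an MMF: with `wip_limit` stories in
    progress at once, the MMF drains in ceil(stories / wip_limit)
    passes of one average cycle time each."""
    passes = -(-stories_in_mmf // wip_limit)  # ceiling division
    return start + timedelta(days=passes * avg_cycle_time_days)

# An MMF of 12 stories, a 4-day average cycle time, a WIP limit of 4:
demo = predicted_demo_date(date(2014, 1, 6), 12, 4, 4)
```

Keeping the number of stories per MMF under control (2 rather than 34) is exactly what makes such a projection stable enough to book the business for a floating demo.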
So we impose limits — we chose a simple rule for the limits, and the same for the transition rules. Nevertheless, depending on the experience of the teams — since at this client we have people who already have a very good knowledge of Scrum practices and of Kanban — we adapt with them. But overall, as you have seen, we work on a scope of about 400 people, so we cannot allow complete freedom, since our goal is primarily to, how should I put it, industrialize Kanban at this client, so that everyone works roughly the same way and everyone speaks the same language. So initially we are forced to impose a method, and then we do what we call PDCA, meaning the teams adapt it. For example, on the first pilots, the Kanban boards have started to be modified because people began to master their user stories, so we now have much smaller boards.
So how does it work?
We implemented quality control of user stories. First, the product owner works on what we oriented our user stories towards: test cases in the BDD method, Behavior-Driven Development — I will show you an example later. The Product Owner's job is first to identify the business need with the business. From the MMF, we take the need and write the behavior-oriented scenario. As soon as the first nominal behavior is created, it is automatically routed to a tester. This tester acts directly with the development team and adds the tests to the user story in BDD format. Then, one or more people from the technical team, depending on the teams' choice, interact directly on the user story to add technical elements if necessary. We removed the concept of technical user stories; we try to keep stories functional to show the business that they have real value. For example, at the beginning of our projects, we had many development teams that disappeared for two or three weeks in the first sprints to work on technical elements, and no one really knew where they were relative to the business's expectations; they were not producing value for the business. To avoid this somewhat negative image and all that feedback, we decided to integrate the technical people directly into the creation of the user story. In addition, and transversally if necessary, we also integrate ergonomics directly into the user story: we try to determine, as we do initially in BDD — where it comes from, what it does, and where it goes — in the wireframe that will be created. From there, we get a second indicator, which is what we call the cycle time: the time to complete a user story at the PO level.
I’ll show you an example, it’s a bit obscure, of a user story that we produce. User stories have a format we call A4, and to manage granularity, we simply limit the number of scenarios. We have three types of user stories: S, M, and L user stories.
Ultimately, one scenario is an S, two scenarios is an M, and three scenarios is an L. Sometimes there can be different scenarios: for example, for authentication, the first authentication will ask for your date of birth — this is the case at La Française des Jeux, for example — and for the second authentication you won't have to re-enter it. So we initially limit stories like this, which gives the first baseline estimate. The acceptance criteria are a bit more technical, meaning we complete them directly in the form of a dataset, so that the test teams can generate their dataset directly from the user story.
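The sizing rule described above — one scenario is an S, two an M, three an L — can be sketched as a simple check. This is a minimal illustration of the rule, not the team's actual tooling:

```python
def size_user_story(scenario_count):
    """Map the number of BDD scenarios in a user story to an S/M/L size.
    More than three scenarios means the story should be split first."""
    sizes = {1: "S", 2: "M", 3: "L"}
    if scenario_count not in sizes:
        raise ValueError(
            f"{scenario_count} scenarios: split the story before it enters the board")
    return sizes[scenario_count]
```

The point of the hard cap is that granularity stays bounded, which is what makes the cycle-time and throughput indicators later in the talk meaningful.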
And then we add the wireframes. So from there, we have built what we call a product backlog. Here is an example of a product backlog board generated with the PO team. We have a "very soft" stage, meaning these are the elements that are proposed and then accepted by the business. We specify the user story with the product owner, we generate the test cases, we define the acceptance criteria, and then the stories are estimated. There is a final estimation step where we use the real added value of a developer — beyond a one- or two-day estimate — which is to truly assess complexity. The story has already been placed in one of the S/M/L categories when it arrives; the developers then verify whether it is complex or not, and a story can ultimately become an L depending on its complexity. This allows us, in the end, to run sprint plannings that last between 15 and 30 minutes at most. We were able to shrink this kind of ceremony considerably, and the development teams are very satisfied. To give you an idea, the first project we managed took 14 hours to do a sprint planning; there are two people from this company here who can tell you about it. Now, a visual representation of this board. What does it correspond to? We borrowed a lot of things from Laurent Morisot's books — for example what we call the cockpit, and a lot of things about indicators. Obviously, we didn't create everything; we reused many things that had already been done, implemented them, and looked at how they adapted. So one of the first elements is what we call the vision: we try to show the business. We grab rooms and walls as soon as we arrive and we set up everywhere.
So here, for example, we have what we call the vision. One of the first things we do is ask the entire team to agree on the product vision, and we write it down. Then, the objective: in three points, what is the objective of our application? From there, we identify what we call the MMFs, and we identify the indicators — the throughput, for example ours is 12 cards per cadence, so we set the cadence — and we set the first lead time. And then you see what we call a Kanban board; normally ours are in color, this one is in black and white. So this is a first start. Once we have put this in place, we start to produce user stories that have reduced granularity, whose size we control, and which developers now find much clearer. That is the first feedback we have, so there is already an increase in satisfaction between the product owners and the development teams. Given that, in our context, the development teams are not on the same site as the product owners, there were many, many communication issues and a tendency to blame each other; on all the projects we have coached, this point has completely disappeared. So next we work on what we call system efficiency, meaning we try to eliminate the first wastes. Because it's good to make nice user stories, but if your system as a whole doesn't work — if you have too many obstacles, too many anomalies, and so on — then regardless of our user stories, our system doesn't work. So how do we work on system efficiency? The first thing is to work on obstacles, everything that disrupts system efficiency. Obstacles are quite simple: as soon as we have one, we note it on the user story, and the user story goes into what we call a fridge, because it is frozen — so we made a little fridge and we put it in.
We generate what we call a small Kensei card. A Kensei card is just a sign that everyone can understand. Something that struck me back at DCNS is that all the stakeholders take an interest in what is being done. For example, two days ago, I was coaching a new team starting Kanban, and the first thing that happened was that everyone around came to see what we were doing: people taking photos, people asking questions. So we try to make sure that anything on the Post-its we put up can be understood by anyone. The Kensei card follows the same principle. Often, when I walk past Kanban boards with obstacles on them, I see things like "production problem" — I saw it again not too long ago. So we decided on a Kensei card to make it clearer: the title, the cause, how it was resolved, the date, and who opened it, so that anyone passing by can understand what it is through a more open dialogue. This Kensei card is placed on its own little Kanban board next to the main one, which is also part of what we call the Morisot cockpit.
Following that, we do what we call a kata. It's like a retrospective, if you will, but focused: one hour of work — like the paper-airplane exercise some of you did yesterday — around an A3. We generate what we call an A3 from the first indicators we have. We take our indicators, which are very factual, and on the other side the team's feelings, and in one hour we try to collaboratively bring together all the necessary stakeholders. For example, it happened with Koussey, who is here, one of the coaches I will introduce shortly, who did his A3 with a team, with the business around the table, the product owners, and the development team. We had a problem with the project's delivery date, and we made decisions collaboratively: we reduced the scope, we added resources, and then we simulated with our indicators what would happen if we applied those changes. So this gives us another indicator: the average resolution time for obstacles and the number of obstacles in the system. This also gives us a new factual element to show. For example, on one project we did it on purpose: we had been waiting for an ergonomist for a long time, so we decided to block the system. We blocked everything — the user stories — and what happened is that one of the project's important stakeholders came forward and asked what was happening, why it wasn't moving forward anymore. We told him: well, the ergonomist isn't here. And he resolved the problem immediately. We take the opportunity to send signals, because even if we report obstacles, managers are not yet very agile at the moment.
Second point: reducing anomalies. How do we reduce anomalies? The technique we used here is quite simple. As I told you earlier, the test team works directly with the Product Owner team, so they have, from the start, a culture of what the user story will represent. Once the user story is sent to the development team, we have a lead time — an average crossing time between when it was written by the test team and when it moves to the test state — of about ten days on average. Those ten days are used to create what we call data sets. And then this person, the tester, accompanies the team to do pair testing: each developer takes their neighbour's user story and vice versa, and we test them according to the scenarios written in the user story. We note on a board all the anomalies that enter the system, and then we do classic XP pair programming: a senior sits next to a junior to explain why the anomaly was made and how to correct it. This exercise is done each time a limit is triggered: for example, as soon as there are 5 user stories in the test state, the tester informs the team that the next day we will do pair testing. On average this pair testing takes one hour: the tests themselves last between 20 and 30 minutes, and then we resolve the anomalies. Sometimes we can go up to 1.5 hours when there are many blocking anomalies. The result is that the level of post-done anomalies — that is, anomalies found after development — is almost non-existent for major and blocking anomalies. The acceptance teams, the QSI teams as we call them, were a bit skeptical about these working methods; they work in sequence, since for now we have a large acceptance phase, often at the end of development. Their first feedback since then is that they would like to implement it everywhere.
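The limit trigger described here — pair testing is scheduled for the next day as soon as the Test column reaches its limit — can be sketched as a simple board check. Column names and limits below are illustrative, not from the talk's actual board:

```python
def columns_at_limit(column_counts, limits):
    """Return the board columns whose WIP count has reached their limit,
    i.e. the columns whose associated policy (such as scheduling a
    pair-testing session for the next day) should fire."""
    return [col for col, count in column_counts.items()
            if col in limits and count >= limits[col]]

board = {"Specify": 3, "Develop": 4, "Test": 5}
limits = {"Develop": 6, "Test": 5}  # 5 stories in Test => pair testing tomorrow
```

The design choice here is that the limit is an event, not just a cap: reaching it triggers a team practice rather than merely blocking new work.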
They noticed that across all the projects coached with this practice, there are almost no more blocking and major anomalies. This means we can do an end-to-end acceptance, and the anomaly rate, which was about 60%, drops to 17% on the latest projects we have. From there we track new indicators: the cycle time of anomalies, the number of anomalies in the system, and also, for predictability, the average number of anomalies per user story.
From there — we worked for about two or three weeks with the Product Owner team — we work with the development team, the implementation team, where we have ergonomists and QSI people. We can also have cross-functional people; for example, there are people who do web services. Gradually, the web-service people worked to propose mocks, meaning we can run local test loops. So the entire company is currently moving to try to reduce the number of anomalies; everyone has understood that anomalies cost a lot in man-days and project costs. On the same principle, the product owner now works on what we call demand-driven pull flow. This is currently the most difficult subject: trying to explain to people that they need to stop producing when the people downstream don't have the capacity to absorb it. What we did to address this was to create a mirror of the Kanban boards. When you are on site with the Product Owners, you see in real time what's happening in Lille, since the development site is in Lille; and conversely, everyone sees — and everyone should see — what we would rather call a project portfolio, from the idea to the end of implementation, so that everyone can see the system's capacity and spot bottlenecks automatically. So now the Product Owner works according to capacity: we tell him the team has the capacity to do 5 user stories, and he tries to manage his flow in relation to that. On the other side, we measure the throughput, that is, the number of cards we manage to complete. The card count makes no distinction between anomalies, user stories, or evolutions. So here again we have new indicators: the cycle time for user stories and the team's throughput in cards. Now let's delve into what we call indicators. From there we built our system; the indicators appear after the fourth week.
So after four weeks, the whole team is working with it. The teams adopt it quickly — especially the development teams, who understand very quickly the advantage of working with a Kanban board, because it allows self-management and above all better communication, and, as a developer said not long ago, gives a complete vision of what is happening on the project. The first indicators we produce are somewhat raw, low-level ones; few people use them directly. We aggregate them to produce what we call KPIs, project indicators, according to the strategy I presented earlier. The first axis is keeping our promises: showing predictability, that is, the project's landing date. The second is business satisfaction: we show the business value produced, quite simply — since upstream we did Lean Startup and Frugal Innovation, we have an idea of the business value we produce. The third is anomaly management: we show the cost of technical debt and whether it is reduced over time. All these indicators are the ones I presented earlier: at each phase of the Kanban, we produce a certain number of indicators — the number of anomalies, the cycle time per anomaly, and the same for user stories. We add further indicators that I haven't included here, associated with the team's workload and the cost of a person, to show how much generating an anomaly costs — because for us it is important that people understand that an anomaly costs a lot of money, and that this money is better spent on testing teams than on anomalies. So the first thing we obtain is what we call predictability.
This predictability is based on the various indicators I showed you: the date in blue is the project's start date, and the date below, the landing date, is the calculation of predictability — and there is a gap between them. This predictability is managed through what we call cycle time, and we have an error rate of less than 2%. This time, unlike at DCNS in 2011 where I wasn't ready — and as luck would have it, live, Laurent Bozovic was right next to me, pointing out the lack of credibility of the numbers often presented in Agile — the numbers hold. It's true that when I gave that DCNS presentation, I even removed some numbers just before giving it, because I hadn't worked enough, so to speak, to show their credibility. Now, these numbers are reproduced every time across the projects. We have 10, 15, 20, 30 — we will have 80 projects that can truly measure whether predictability works. Our goal within this project, at least for us as coaches and in building this support wave, is to initiate people to Kanban, because we have a small advantage over Scrum: change management is much faster at the start. There is much less pushback; people adopt it very quickly. It's impressive: after 4 weeks, everyone is in the system and everyone understands its advantages very well. And what I notice — I see it with DCNS, because the system still works and I still have access to the indicators — is that two years later there is no dilution of the system. So this is what we call predictability through cycle time. However, we have a small issue: we are currently in a manual phase for entering indicators, and we are moving to a computerized phase, meaning people simply move their user story in an electronic tool, while still keeping visual management, of course, since it has a big impact.
Sometimes we have people who don't track what we call cycle time very rigorously. So we added a second type of predictability, based on what we call throughput: we simply count the number of cards. It's our friend Koussey, whom we call Indicator Man, who maintains this for us now. And we see there is a gap: I recently had fun gathering all the numbers, and when we measure with throughput, we end up with a deviation of between 10 and 15% in the project's predictability compared to the cycle-time baseline.
Next to this indicator you have what we call estimates — estimates of 10, 20, 30. We provide a tool, simply an Excel file, which allows us, at the same time as we do the A3, to simulate scenarios in the system: if we add so many obstacles, if we keep the obstacles, if we add 10 new user stories. In the case you currently see, no new user story has entered the system; we already have all the user stories. Then one day — it actually happened to us — the business came and said: I need 20 new user stories. As usual, since it's the business, everyone said yes. However, as soon as we simulated it and saw the date, everyone walked back the decision. That's an example of what predictability is useful for. Of course, predictability doesn't by itself allow us to keep our promises; as we know very well in Agile, we don't always manage to meet deadlines when, as our client says, they want both the dates and the scope — it's very difficult. But having a predictability indicator allows us to bring people together around the table and make collective decisions. And ultimately, since we've been doing this, the business teams who were unhappy with the projects' landing dates are now very satisfied, because they can see the dates in real time — the indicators are there in real time — and above all they can help, for example by granting additional resources or reducing the scope to meet deadlines. Since then, business satisfaction has increased. And above all, we work a lot on what we call problem solving, resolving issues to improve the system in order to get better predictability. The system is becoming more and more efficient — for example, a developer told me not long ago: well, now I'm working on what I'm paid for, that is, the development I enjoy.
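The "what if the business adds 20 user stories" simulation described above can be approximated with a throughput-based projection. This is a sketch under simple assumptions (steady weekly throughput, illustrative numbers) of the kind of calculation the Excel file performs, not the file itself:

```python
from datetime import date, timedelta

def landing_date(today, remaining_cards, throughput_per_week):
    """Project the landing date from the cards still in the backlog
    and the observed weekly throughput."""
    weeks = remaining_cards / throughput_per_week
    return today + timedelta(days=round(weeks * 7))

today = date(2014, 3, 3)
base = landing_date(today, 40, 10.0)        # current backlog: 40 cards
extra = landing_date(today, 40 + 20, 10.0)  # the business asks for 20 more
slip_weeks = (extra - base).days / 7        # the visible cost of saying yes
```

Showing `slip_weeks` to the stakeholders is precisely what turns "everyone said yes" into a collective, informed decision.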
There are fewer and fewer problems in the system, because everyone is aware that problems delay the project, and thanks precisely to these predictability indicators, we can see it in real time. For information, I'll show you a few of the graphs we produce. We have plenty of graphs, because we build them in direct relation with the portfolio managers, the people who manage the different projects. Some people have a very indicator-driven culture and others don't at all and have trouble reading them. For the latter we create what we call happy boards: two or three numbers that tell them, your project is here, it is at 78% of the way to the landing date and there will be a slight delay. Others love indicators — we have former Accenture managers and others who ask us for value indicators, input-output indicators. The first indicator you see at the top is what we call cycle time — simple, classic, taken from a well-known book by an author who is in the room. This indicator clearly shows the effect of the coaching: at the beginning we had a very high cycle time, meaning a very long time to complete user stories, and little by little — this is the result after 6 weeks of an 8-week wave — we reduced it. It's not magic: we simply removed many obstacles and worked on very good user stories, which also reduced anomalies. This is one of the most important points raised by all the stakeholders: the quality of the user stories. Another indicator, right next to it, is the input-output at the level of user stories, and we see that the gap is small: as soon as something enters, it comes out at roughly the same time.
So we control it. The idea isn't to reduce the size of a user story at all costs — then the cycle time would no longer mean anything, and we would end up generating far too many stories. It's mainly about mastering the efficiency of the process: when something enters the system, it comes out at roughly the same time. Below is what we call the anomaly histogram, and the cumulative flow diagram that you see. There's one indicator I care deeply about, which we added to the dashboard and which didn't exist for us before: what we call the calendar. We put a physical calendar in the room, on the dashboard, where all the stakeholders write Monday, Tuesday, Wednesday, and opposite each day: ceremony, meeting, absence, workshop, vacation. Every time someone goes to a workshop, a meeting, or a ceremony, on returning they simply note the number of hours. It's not assigned to a person; it's for the group. We collect all the sticky notes, all the hours that were written, and we aggregate them, and we get this weekly indicator. For example, blue represents vacations — this project started right in the middle of summer, so there are a lot of vacations. There's another color we see: purple. Purple represents meetings that have strictly nothing to do with the project — people spending their time in meetings on things unrelated to it. Ceremonies are in red. Seeing this allows us to reduce the average time of a ceremony and make it as efficient as possible for production. When we started, there were retrospectives that lasted between 2 and 3 hours, full of long discussions about what we had seen on TV.
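The calendar indicator boils down to aggregating the hours written on the sticky notes per week and per category. A minimal sketch of that aggregation — the category names and numbers below are illustrative:

```python
from collections import defaultdict

def weekly_calendar(notes):
    """Aggregate sticky notes of the form (week, category, hours) into
    hours per category for each week: the team-level calendar indicator."""
    totals = defaultdict(lambda: defaultdict(float))
    for week, category, hours in notes:
        totals[week][category] += hours
    return {week: dict(cats) for week, cats in totals.items()}

notes = [
    ("W31", "vacation", 16.0),  # blue in the chart described above
    ("W31", "meeting", 6.0),    # purple: meetings unrelated to the project
    ("W31", "ceremony", 2.5),   # red
    ("W31", "meeting", 4.0),
]
```

Because the notes are anonymous and attributed to the group, the indicator measures the system's meeting load rather than any individual's.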
I once attended a meeting where people spent an hour comparing TV series. I was very surprised. But why? Simply because the Scrum Master had never been trained to run even a simple ceremony. So what we've just done, for example (I've just ordered them), is buy hourglasses and place them in the room where everyone can see them. Time is gradually measured with them, and the ceremonies become more and more efficient. We also put up a few messages in the room: one conversation at a time, we're not here to attack others, and so on. Little by little, we're instilling the art of meetings. What we mainly see now is that a team consumes about 200 hours per month in meetings, so our goal was to reduce that. Here too, people were able to spend more time doing what they were paid for, and especially what they wanted to do; developers, for example, told me it was very pleasant. With fewer and fewer meetings, teams that used to average 50 hours of meetings per week have dropped to 2 or 3 hours.
To conclude, I'll introduce you to the team; here's part of it. Here you have Koussey, who is taking a photo; it's quite lifelike. You have Gilles Mantel, the offering manager and a true Scrum purist, who argues with me a lot. There's Renaud Chevalier, who had the opportunity to implement what we call Future Time on a large project around PMU's transition to iPad, and who joined us. And we have what we call the team; I haven't had time to draw them yet, which is why they aren't shown. We have Benjamin, who is here, directly from the client company; he doesn't work at Xebia and will be certified as a Kanban coach at his group's request, working directly with me. Another person named Benjamin Marlière, in the back, also works for this group as part of this project. Then we have other coaches in the field: Ludovic, Gwenaëlle, Elodie, Marc. To give you an idea, at the beginning of 2014, we will have 10 coaches simultaneously managing projects on this account, so it's an industrialization effort. This project has since gained a lot of attention, and we're also adapting it for smaller teams. I'm currently implementing it for a well-known client on Boulevard Haussmann, who has very large department stores there, is creating what we call a marketplace, and is transitioning to this practice. There we work with small teams, people who are more in startup mode because we're creating a marketplace, and they adapt very well to this approach. We haven't removed Scrum at all, I assure you. You still have the business roles: the classic Product Owner with the same responsibilities, which we try to maintain. The Scrum Master is still there, with the same responsibilities expected within the Scrum framework. We still have sprint planning, demos, retrospectives.
Simply, obstacles are managed in katas, and retrospectives are used for introspection, meaning continuous improvement within the team. We focus the kata on handling obstacles as they arise, and the retrospective on working on ourselves: how we improve as a team.
The difference is simply that we're no longer in an iterative approach; we're in continuous flow, and all the ceremonies float. As soon as we feel there's an obstacle, we do a kata. There can be three katas in a week; that can happen, and it often does at the start of a project. Retrospectives, for example: at the start of a project there's often friction between the Scrum Master and the Product Owner, so we put them in the room with the whole team and learn to manage those frictions. So there will be a lot of retrospectives at the beginning.
Demos are associated with what we call MMFs, minimum marketable features, which we saw earlier: we demo when we have a marketable product, meaning something close to deliverable. We had a problem: at the beginning of the Scrum practice, at least with this client, the business teams, stakeholders, and sponsors attended, and gradually they stopped coming because, as they said, "I'm not interested in seeing a form." So now we wait until we have a complete product, one that works end to end and meets a first real need for the people we invite. We invite them sometimes after 15 days, sometimes a week, sometimes three weeks, but only when there's real value for them. We can still do small internal demos, since we're lucky that some projects are on iPad: we create IPA files, send them to everyone, and everyone can test in real time. We always keep the concept of sprint planning, but it's very short because we've already done prep work with the teams. The teams see the user stories very quickly; we no longer review them in large batches but in groups of about 5. In one hour, most of the time, they do what we call a magic estimation in S, M, L. It's very quick. Often a few user stories remain, one or two that seem a bit complex; we set them aside so they can be discussed later. We sometimes add planning poker to speed things up, with a small hourglass so we don't spend hours and hours talking. What Koussaï did at the beginning was measure the cycle time for the different user-story sizes, S, M, L. And we realized it was useless. It's true that sometimes an L is developed in the cycle time of an S and vice versa, but it doesn't matter much because we take the average. And we realized it's the mathematical principle of what we call regression to the mean. A very simple example: every day you leave your home to go to work.
Well, I live in Paris, it's not fantastic, so I take the metro, and so on, but I know that on average it takes me 20 minutes to get there. I didn't need to calculate every red light and every intersection; there's no need to go into that level of detail to estimate. Some days it takes me 30 minutes because the metro is late; other days it goes a bit faster because there are fewer people, or it's a public holiday (yes, I've gone to work on public holidays before, I had forgotten). Sometimes I have fun calculating it, and I always come back to the average. That's what we call regression to the mean. So we don't really need to estimate once we measure the average time. We actually removed the notion of estimation to focus on the complexity and on the added value the developer brings directly to the user story. They built it with the... As a result, the teams currently have much more satisfaction. I took a few quotes from within the teams; there are plenty, new ones come out every day. To be transparent, it works very, very well, everyone loves what's happening, there's a big buzz inside, and we take advantage of it to make the system spread.
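The argument above (measure the average cycle time instead of estimating each story) can be sketched numerically. This is a minimal illustration with hypothetical cycle times, not the team's actual data; the "forecast" is a naive single-lane projection that just multiplies the overall mean by the number of remaining stories:

```python
# Hypothetical cycle times in days, tagged with the "magic estimation" size.
cycle_times = [
    ("S", 3), ("M", 5), ("L", 4), ("S", 6), ("M", 4),
    ("L", 7), ("S", 4), ("M", 6), ("L", 5), ("S", 6),
]

def mean(xs):
    return sum(xs) / len(xs)

# Overall average: the only number the forecast needs.
overall = mean([days for _, days in cycle_times])

# Per-size averages overlap heavily with the overall mean: an L sometimes
# completes in the cycle time of an S and vice versa, so the label adds little.
by_size = {s: mean([d for t, d in cycle_times if t == s]) for s in {"S", "M", "L"}}

# Naive forecast for 12 remaining stories, no per-story estimate.
remaining = 12
forecast_days = remaining * overall

print(overall)        # 5.0
print(by_size)        # per-size means cluster around 5.0
print(forecast_days)  # 60.0
```

The point is not that the per-size means are identical, but that individual deviations wash out over a batch of stories, so the team can forecast from the measured mean and drop the estimation ceremony.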
What we need to do is be very careful, because people would like to go much faster. The testing teams now want to be fully integrated into Kanban; the production teams are very interested in DevOps; everyone wants to do it. So we still do presentations to tell them that one day we will work with them, and we explain why we aren't going faster: why we work first with the Product Owner and the Scrum Master, people with whom change is easier to manage. Honestly, managing change with the business side is a huge and very difficult task; they don't understand until they've seen it. For example, blocking anomalies, as I was telling you, are almost nonexistent; we no longer have them, and ultimately we are reducing the company's debt. "I can finally focus on development": that's a developer who told us this, and he says it again in an email. "The result was impressive": that's one of the portfolio managers.
The projects he managed always had, he told me, one to two months of delay on average; that was the minimum. The project we're currently doing had, at last news, 20 days of delay; I think it's come down to one, that's right, it even gained a day. For him, it's already impressive that we deliver on time with the same scope.
And the last thing is that I can see the project in real time, which now reassures people, and the indicators reassure the business, provided we stay transparent. For now they aren't very involved; they're watching. But there's a new, digitally oriented business culture within the group, which means we increasingly have business people who really understand the web, understand what's happening, and are very interested. New players are coming in to see what's happening, working on the indicators with us and trying to create what we call MMFs together, instead of asking, as we heard a lot when we started, for things with a fixed scope. So I'll conclude. I don't know if I'm on time... I have 10 minutes left, wow! Okay, we're going to do a planning poker.