Clément Rochas & Anis Chaabani
Transcript (Translated)
[00:00:02]
Hello everyone. Thank you for coming in such large numbers. Apparently, since the room is full and the doors are closed, we can start early. So either we finish early, or you have a lot of questions, in which case we will be delighted.
[00:00:17]
Go ahead, you can start. So today we're here to talk about measuring team performance and, above all, what we tried to do based on a framework you may have heard of called Space, which, given the informed audience, should be somewhat familiar. But most importantly, we're going to talk about change management and what we did to really make it happen in the company. A little fiction to start, in a humorous tone. So, in a fictional company, we have Agathe, who is the CEO. She manages a fast-growing company. The teams have grown a lot recently, so a lot of investment has gone into the payroll, and she would now like to know whether this investment has an ROI.
[00:01:13]
Uh, well, she knows the saying about parallelization, that nine women can't make a baby in one month, all that, I'm not going to repeat it. She's an engineer, so she tells herself it's not that complicated, and since she has coded a bit, she tells herself it could generally be done over a weekend. It's a fairly well-known bias. Let's also show a bit of empathy: she has many, many other topics than tech in her company. There are topics related to customers, to strategy, she has shareholders, she knows the competition well, there are regulatory problems. In short, her mental load is perhaps also higher than our little engineering concerns, very often. She's been asking for reporting for a year now, and it's not happening. She still has no visibility on whether all these hires have brought productivity. She's a bit fed up; soon she'll be counting lines of code, and so as of Monday you might have to count how many lines of code you produce per week.
[00:02:25]
Next, we have Gordon. Gordon is an engineering manager.
[00:02:29]
Uh, he's in the field. He knows his team is working, he sees them working every day, and he puts in the effort too. He even has the impression that they work more than is reasonable and that it's not sustainable.
[00:02:46]
Uh, regarding performance, for him, the blockers are mainly decisions made outside his team, and often, it's even more about non-decisions that block him.
[00:03:00]
Uh, and above all, if he is given objectives to measure things he doesn't really understand, he'll do it, but he'll report what people want to see. It won't necessarily be the reality.
[00:03:15]
And then we have a consultant who just arrived not long ago, named Yves. And he has a mission that comes directly from management. He needs to go fast. He likes to reuse templates or methods he already used in his previous mission.
[00:03:30]
Uh, the teams are not always available on the day he is available, and he has to run his workshop, so that also creates some friction. And often, he doesn't necessarily have the time, and sometimes the competence, to understand the team's real context and why this or that measure doesn't work in the team's case. So that also potentially creates quite a bit of frustration. So here are three personas. They all have needs regarding performance measurement. They are all in good faith, we cannot doubt that. But it will be complicated to align them and to make everyone want to do something that benefits both them and the company. And we will especially focus on how the engineering manager and their team can measure things that allow them to improve or to ask for help effectively.
[00:04:30]
This is another slide. My name is Clément Rochas, I've been at Scaleway for 6 years now.
[00:04:40]
And my name is Anis Chaabani, I'm an agile coach at Scaleway as well, for about 2 years now. And I'm also in charge of a recruitment and junior onboarding program, which is super cool. If you're interested, we can talk about it after the conference.
[00:05:00]
Uh, a quick word about Scaleway. Scaleway is a French cloud provider. We have our own data centers in France, under French law, and we are developing a cloud provider offering, of instances and managed services, which is made in France, in Paris and Lille, and remotely across mainland France.
[00:05:30]
Just to add some context for what we're going to talk about today: we have about forty development teams at Scaleway, with 5 to 10 people per team. They are all managed by engineering managers, who are proximity managers in these teams, and these roughly 40 teams are grouped into tribes of about fifty people, which aim to be as autonomous as possible in carrying out their actions. So meta-teams of about fifty autonomous people, with teams within them, and about forty teams in total.
[00:06:12]
And so today we're going to talk about the productivity of development teams, as you understood. So we'll start with a little word about productivity. It is the contribution of one or more production factors, so it's about identifying these production factors. Generally, when we talk about productivity, we talk about output, effectiveness, efficiency. It's also the measurement of the impact of human actions. In a certain sector, for example making pants, we can measure productivity fairly easily, or at least we have the means to, by knowing how many pants per day we manufacture. However, when it comes to software development, it's a bit more complicated. We have already tried solutions in the past: measuring the number of lines of code, function points for those who knew that, COCOMO and the like. It was never really satisfactory for the people who really tried to do it. So the idea today is to test something else, to look at other alternatives to these methods.
[00:07:31]
So, when we started to take an interest in this subject, quite logically, we turned to Dora. Dora, for those who don't know it, was first mentioned in the book Accelerate, which talks about DevOps, and which is actually research led by Nicole Forsgren with other researchers. Dora is a framework for measuring team performance through four main metrics, on two axes. On the velocity axis, the two metrics are deployment frequency and lead time for change, the time between the moment code is first committed and the moment it reaches production. On the stability axis, the change failure rate, i.e. the percentage of changes that cause a serious problem, and the time needed to restore service once there's a problem in production. And so, velocity plus stability gives us a good indication of team performance. What is interesting is that from there, they continued to conduct these studies via assessments with several teams in several different companies, and they regularly publish reports called the State of DevOps Reports; the latest one is from 2023. They classify these teams according to the scale shown here: elite, high, medium, low, with, for each level, the deployment frequency (on demand for elite), the change lead time (less than one day for elite), etcetera. And what is interesting here is that you too can measure these indicators and position yourself in relation to these teams, with regular updates.
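To make those four metrics concrete, here is a minimal sketch, not Scaleway's actual tooling, that computes them from two hand-exported lists of records (the field names are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical, hand-exported records: one dict per production deployment / incident.
deployments = [
    {"committed_at": datetime(2024, 2, 29, 16), "deployed_at": datetime(2024, 3, 1, 10), "caused_failure": False},
    {"committed_at": datetime(2024, 3, 1, 11), "deployed_at": datetime(2024, 3, 2, 9), "caused_failure": True},
]
incidents = [
    {"opened_at": datetime(2024, 3, 2, 10), "resolved_at": datetime(2024, 3, 2, 12)},
]
period_days = 30

# Deployment frequency: production deployments per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead time for change: first commit to production, averaged here for simplicity.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
lead_time_for_change = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a serious problem.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# Time to restore service: average incident duration.
restore_times = [i["resolved_at"] - i["opened_at"] for i in incidents]
time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(deployment_frequency, lead_time_for_change, change_failure_rate, time_to_restore)
```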
[00:09:39]
And so, uh, from there, we continued to follow Nicole Forsgren's work primarily.
[00:09:47]
And that led us to the Space framework. So what is the Space framework? It's an article published in 2021 by, once again, researchers including Nicole Forsgren, which says that we can go further; it's a bit of a continuation of the research they started with Dora, using a somewhat more global vision, looking at things from a slightly broader scope. It starts by talking about some myths about measuring productivity. The first is that productivity depends entirely on the activity of developers. Generally, that's not the case; it's rarely the case. The second myth is that productivity depends solely on individual performance, meaning that team performance is the sum of the performance of the individuals in the team. That's clearly not the case either, because there are many other factors.
[00:10:54]
Uh, third myth is that a single productivity metric can tell us everything.
[00:10:59]
Like the number of lines of code.
[00:11:02]
We know that didn't work in the past. Uh, fourth myth is that productivity measures are only useful for managers.
[00:11:10]
Whereas normally, the primary people concerned are the developers themselves. And finally, that productivity depends solely on engineering systems, meaning the developers' tooling. So if we give better tools to developers, they will be more productive. It's not enough.
[00:11:30]
So what is Space, finally, what is the value proposition? It's a way to think more rationally about developer productivity, to choose the measures more carefully, to understand their limits, and above all to put them in context and to understand in which contexts a given measure is appropriate.
[00:12:02]
So, more concretely, what this framework brings us is three levels and five dimensions. The levels: the first is the individual, so a developer; the second is the group or the team; and the third is the system, meaning several groups working together to deliver an end-to-end product. The five dimensions form the acronym Space: Satisfaction and well-being, Performance, Activity, Communication and collaboration, and Efficiency and flow.
[00:12:50]
Some examples of metrics, if we focus only on the dimensions. Uh for satisfaction and well-being, so it will be for example, developer satisfaction via surveys for example. It will be turnover or retention. Satisfaction with code reviews, uh satisfaction with tooling, the engineering system, etcetera. We can also see that uh Dora is actually part of Space. Since these are metrics that can be classified according to these five axes.
[00:13:29]
I'll let you see a few more metrics. For example, in efficiency and flow, we can find everything related to interruptions, the time of interruptions, flows, flow velocity, the time needed for code reviews, etcetera. So what to remember here is that you need to take measurements on several axes. And not just one.
[00:13:56]
Moreover, Space tells us that we need to take measurements on at least three of these dimensions.
[00:14:08]
So there, it's a proposal that was initially made in an article. Now, we have to see how to put that into practice. And we've been trying to do that at Scaleway for a little over a year now; it's been about a year.
[00:14:31]
And so, uh, what was our need, actually? Why did we want to measure this productivity at Scaleway? So we needed to measure the impact of team actions uh and also managers' actions. We needed a common approach because there were already initiatives within the teams, but they were isolated. So all these efforts had to be pooled. And we also needed assistance and resources, meaning some teams told us, well, we're willing but we don't know how to do it and you need to help us.
[00:15:10]
So the choices we made. We absolutely wanted it to be a bottom-up approach, to include the teams in these choices, not something imposed on the teams that would very quickly be criticized and not implemented. For that, we made a choice: we focus on the group level, the team level. The individual level doesn't interest us much, because for us, the unit is the team.
[00:15:42]
Uh, and then the system level, that would be a next step.
[00:15:48]
So, uh, we also made the choice that this approach should be collaborative, meaning it shouldn't be implemented by one or two people, but rather by a working group. Uh also that this approach should be useful for the teams above all, and then also for the managers.
[00:16:10]
And so, what did we do? So we started by bringing together motivated people. Uh for that, we used a guild, a community of practice within our company, which is the Engineering Manager guild. I'll let Clément talk a bit about this guild.
[00:16:25]
A little digression to explain this system to you. We have two or three guilds, well, I'll redefine the word later, that operate at Scaleway. You remember earlier, I was talking about six to eight different tribes; and for this community of practice of engineering managers in particular, whether they are in those teams, in the IT department, which is different from engineering, or even on the operations side where a few teams do software, we created this community of practice so that all engineering managers, at their level, can share about their role. So only the engineering managers work together and talk about their job, their position as engineering manager, together, to find solutions, to improve, to set up working groups to tackle topics or make proposals to management, things like that. It's been two and a half years now, I believe, that we've been working like this, and this EM guild in particular is the first one that has really functioned correctly. It's first of all an exchange: we hold a session of about an hour and a half roughly every two weeks. Generally, the agenda is not defined in advance; or rather, we have a page that is created and topics that pop up during the two weeks, and then they are addressed by this autonomous group with the minimum possible facilitation, at least from a single person. So the group self-facilitates and tackles the topics one after the other. And sometimes they decide to create a group of three or four people who will work on a topic and present their findings a few weeks later in another guild session. As a little aside, the way we've been operating for quite some time now, and it works well, is that the first 15 minutes are called 'café crème'. It's a moment where everyone comes with their coffee, since it's remote, discusses, we start organizing and adding topics to the agenda, and then around 2:15 PM, we know we start, whether people are there or not, and we go through the topics one after another. That's a bit of the life of a guild at Scaleway.
[00:18:54]
And so, we based ourselves on this guild; we submitted to them the proposal that we had thought about with Clément at the time, which was well received by some EMs. So we asked these EMs to be our early adopters, to start testing the approach, to give us feedback, and above all to keep it alive, to make it evolve with us. Moreover, I see one in front of me.
[00:19:21]
Uh, and so these same EMs became ambassadors to share the successes they had, so also share the feedback they received with the rest of the teams, and also to support the rest of the teams in implementing the approach.
[00:19:41]
Uh, so concretely, what did we do to put all this in place? We had said that the teams needed help, but also resources, so we put in place a certain number of things. We'll go directly into the details. First, we set up a portal on our internal Confluence, which lists all these resources, all the explanations about the framework, etc. We also set up a catalog of metrics, with a somewhat prettier view. This catalog lists a certain number of metrics that we found ourselves, that we already knew, or that we took from other frameworks like Dora, or even DevEx for those who know it a little. But the teams could also quite simply propose other measures themselves.
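As an illustration of what an entry in such a catalog can look like (the schema below is hypothetical, not Scaleway's internal one), each metric carries its Space dimension, how it is collected, and a pointer to a shared script when one exists:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Dimension(Enum):
    SATISFACTION = "Satisfaction and well-being"
    PERFORMANCE = "Performance"
    ACTIVITY = "Activity"
    COMMUNICATION = "Communication and collaboration"
    EFFICIENCY = "Efficiency and flow"

@dataclass
class CatalogMetric:
    name: str
    dimension: Dimension
    source: str                    # e.g. "survey", "GitLab API", "Jira API"
    script: Optional[str] = None   # path to a shared collection script, if a team contributed one

catalog = [
    CatalogMetric("Code review satisfaction", Dimension.SATISFACTION, "survey"),
    CatalogMetric("Deployment frequency", Dimension.ACTIVITY, "GitLab API", "scripts/deploy_freq.py"),
    CatalogMetric("Lead time for change", Dimension.EFFICIENCY, "GitLab API", "scripts/lead_time.py"),
]
```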
[00:20:38]
I would also add that the interest of making our own repository is that behind each of these metrics we add, as soon as someone wants to try it, we try as much as possible to provide the ability to do it in the context of the company. That is to say, we have scripts: some teams develop scripts to go get the metric, and then they share those scripts.
[00:21:10]
For everything that is more about surveys and perception, we also scripted the generation of small surveys, so that each EM can generate their survey, send it to their team, and retrieve the information from small surveys of three or four questions. The EM generates it, modifies it if they want to tweak it a bit, sends it, and so we facilitate the use and application as much as possible. It's not enough to say 'measure this and that'; you also have to give as many tools as possible to go get that metric in our own context, in our own information system.
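A minimal sketch of what "scripting the generation of small surveys" can mean; the question bank and topics below are invented for illustration and are not the internal tooling:

```python
# Hypothetical question bank per topic; answers on a 1-5 scale are assumed.
QUESTION_BANK = {
    "code_review": [
        "How satisfied are you with the time it takes to get your code reviewed?",
        "How useful are the review comments you receive?",
        "How comfortable are you reviewing code outside your usual area?",
    ],
    "tooling": [
        "How satisfied are you with the CI feedback time?",
        "How often does tooling interrupt your flow?",
        "How easy is it to set up a development environment?",
    ],
}

def generate_survey(topic: str, team: str) -> str:
    """Render a short survey an EM can paste into a form or a chat message."""
    lines = [f"Pulse survey for team {team} ({topic}), answer from 1 (low) to 5 (high):"]
    lines += [f"{i}. {q}" for i, q in enumerate(QUESTION_BANK[topic], start=1)]
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_survey("code_review", "storage"))
```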
[00:21:47]
That's right. So there are measures taken from the literature and others that are really specific to the company. For example, we have a notion called doctoring at Scaleway: a person who, in turns, takes care of the run and is the team's entry point for it. So for example, there were metrics on that, to measure the effectiveness of that system with clearer measures. So, things that are a little more specific. And the main idea is that this catalog is enriched collaboratively by the whole company, the whole engineering organization.
[00:22:27]
Then we also set up a bootstrap workshop. It's a workshop that we, the ambassadors, propose to lead for new teams that would like to get involved. This workshop allows a first reflection, to understand the approach, but also to leave the workshop with the first metrics to put in place. That means the metrics are built collaboratively by the teams. That also means the metrics are not the same for all teams: each team will have its own. That's very important, and it goes with our bottom-up approach, because for us the teams don't have the same context, don't have the same problems, and shouldn't measure their performance in the same way. That also means we cannot compare teams based on these metrics. That implies a lot of things, but in any case it's an approach that is accepted, that allows this performance to be measured, and that is also focused on what we call objectives. At the beginning of this workshop, all we ask is that the team comes with one or two objectives.
[00:23:37]
What are objectives? They are areas for improvement. And the workshop is built around that: okay, to improve on this axis, what measures do we need to put in place to measure the current state and then the impact of the improvement actions we will take afterwards, via these measures? How does it work? Collaboratively, we ask them to choose metrics, either from the catalog or by proposing new ones, and to classify them along the five dimensions as a first step. Then we do dot voting to keep only five measures, because if we put too many in place at the same time, we'll never get there. So we choose five metrics.
[00:24:25]
And, very importantly, these five metrics must be distributed over at least three dimensions, to come back to the Space proposal, which says that we must have measures on at least three of the five dimensions so that we have different perspectives, different points of view. And so, at the end, we have our five metrics, and the next step is to put them in place and use them. That takes a fairly long time, generally a quarter: putting them in place, using them, seeing their impact, etc. Then we repeat this workshop regularly, every quarter for example, to see if the metrics still make sense, whether we keep them or drop them, and what new metrics to add, notably with new objectives.
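A sketch of that rule, reusing the hypothetical CatalogMetric structure from the earlier sketch: a simple check that a team's dot-voted selection keeps at most five metrics and covers at least three dimensions.

```python
def validate_selection(selected):
    """selected: list of CatalogMetric objects chosen by dot voting."""
    if len(selected) > 5:
        raise ValueError("Keep at most five metrics, or the team will never implement them all")
    covered = {m.dimension for m in selected}
    if len(covered) < 3:
        raise ValueError(f"Only {len(covered)} dimension(s) covered; Space asks for at least three")
    return covered
```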
[00:25:27]
Then there remains the question of how I will publish the result of my metrics or what I make available outside of the team. And so, we offer them a report template. Uh, so it's not a dashboard, it's really a written report. Uh, it's text. And so a template that they can take up, rework, adapt to their sauce, etc. in which they will explain the context, the metrics that interest them. Explain why we had these results, what actions to put in place to improve, etc. So really provide measures with a precise context and not just have numbers. Because numbers can be deceptive, uh, they can be subject to several different points of view.
[00:26:23]
And then around that, there are a lot of other, more technical tools. At Scaleway we use GitLab and Jira a lot, so the GitLab and Jira APIs are heavily used to retrieve all kinds of data to calculate the metrics. We also tested Apache DevLake. It's an Apache tool that can automatically calculate, for example, the Dora metrics, among others. In fact, it pumps data from several sources, for example GitLab, Jira, etc., puts it all in a database, and provides a Grafana dashboard where you can play with predefined metrics and add others, quite simply.
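For instance, deployment frequency can be pulled with a few lines from the GitLab REST API; this is a minimal sketch (URL, project ID and token are placeholders, pagination is ignored), not the scripts actually shared internally:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder
PROJECT_ID = 1234                            # placeholder
TOKEN = os.environ["GITLAB_TOKEN"]

since = datetime.now(timezone.utc) - timedelta(days=30)

resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/deployments",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={
        "environment": "production",
        "status": "success",
        "updated_after": since.isoformat(),
        "per_page": 100,   # pagination ignored in this sketch
    },
    timeout=30,
)
resp.raise_for_status()
deployments = resp.json()

print(f"{len(deployments)} production deployments in the last 30 days "
      f"(~{len(deployments) / 30:.2f} per day)")
```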
[00:27:15]
DevLake was a bit average for us, though; we couldn't push things very far with it, because it's quite pre-fabricated. A lot of things are predefined, and depending on how the teams use the tools, the data they have, etc., it can be very complicated to adapt it to our own sauce.
[00:27:37]
We also had issues with the number of teams and access rights, since not everyone has access to the code of all the teams. You can set up a DevLake instance for your own team, but if we wanted a DevLake managed for everyone, we would be blocked on authorizations: it would have needed rights on the whole platform. Let's say it worked in a simple, local context.
[00:28:07]
When you want to scale and use it more widely, it was a bit complicated. So, one person in particular proposed doing that on Redash. It's a bit more technical, it needs more code, etc., but it works pretty well when you put in the necessary effort. He then shared what he had done with the other teams so it could be taken up, and that worked pretty well. There were also internal tools that started to emerge: people who started to develop things, scripts, tools, etc. And then, as we've already mentioned, reports for the restitutions.
[00:28:49]
So, that's about the tools and resources we put in place. By putting all this in place, we learned a number of things that make it work when you use Space. First, the measure of productivity is above all a tool for continuous improvement for the teams. If we see it otherwise, if we think it's first and foremost a tool for managers to make the right decisions, which is true by the way, it doesn't work, because the teams will tend to provide you with the figures you want as a manager.
[00:29:27]
That, you know, is quite simple: the watermelon indicators and the like, green on the outside, red on the inside. Like in the case of Gordon at the beginning of the presentation, you can totally give the manager the figures they expect. On the other hand, when it is a tool for continuous improvement for the teams, that's when it gives rather good results.
[00:29:53]
Second thing is that you have to see things quite broadly. So Space offers us to have metrics on at least three of the dimensions. So we come back to the story of perspectives, and that's very important. You absolutely must not fall into the trap of abandoning them.
[00:30:15]
Uh, also, you always have to share with an interpretation, with the context, and not share dashboards with numbers. Dashboards with numbers will necessarily lead us, in any case, they will lead managers to compare between teams, to ask, to ask for metrics that would be the same to be able to compare. While often the contexts are not the same, the constraints are not the same. These are often things that are not comparable.
[00:30:46]
You also need to carefully choose the metrics that you are going to share outside the team. That's very important, because we can have a lot of metrics in the team that we use for continuous improvement, but we can choose as a team not to make them all public. So you have to choose which metrics make sense to share, which won't create unnecessary problems or misunderstandings. And this work, which is to be done as a team, not by a manager, is very important, so you have to carefully choose the metrics
[00:31:21]
to make public.
[00:31:26]
You also need to use them, because it takes effort to put all these metrics in place. So you really have to use them to the fullest, uh, especially during retrospectives. So retrospectives are often a good time to bring out these metrics to say, okay, so we had taken actions during the previous retrospective, uh, here are the measures we had put in place, here's where we were and where we are today.
[00:31:58]
That's about Space. And to put all that in place, as you well know, it doesn't work by magic, so we had to put in place a whole change management approach.
[00:32:13]
And I'll let Clément tell you about this change management approach.
[00:32:21]
Who knows Kotter's steps?
[00:32:26]
Ah. I would have thought more. Great. Well, the intro slide will fit perfectly. Kotter is someone who worked on change management. In short, he described an eight-step system that can be applied to any change. The first is to create a sense of urgency. It means that before starting to initiate a change, you have to be sure that everyone is well aware that it is important to make this change now; you have to create the conditions for wanting to change. Then you need to form a coalition, meaning a small group that will work on the subject and that will consequently have autonomy, with people who are both from the field and from management, and the coalition must have a mandate to make this change possible in the organization. Then you need to have a vision that is realistic. You need to communicate it. You need to incite action.
[00:33:40]
You need to generate short-term victories, so small steps, which might remind you of things we say when we talk about agility. You need to consolidate this success for more change, and then you need to anchor what has been gained and make it part of the company's culture. So those are, broadly speaking, the eight steps to respect in change management, at least according to Kotter.
[00:34:15]
Uh, first, our sense of urgency. We were coming out of two hyper-growth phases in the company: we went from 150 to 400 people, then to 600, in less than four years. And we are in a new stabilization phase. We continue to grow, but more slowly, and above all, it's a bit what I was saying earlier: after the need for hyper-growth, growth on steroids that was sometimes not thought through, now that we have the right size, we need to stabilize and give some meaning to everything we've done.
[00:35:01]
Our sense of urgency was there, because when the teams tell themselves that we're going to have to start measuring and showing how our teams perform, how they are measured, it's also a way to take the lead and participate in this intelligent stabilization effort.
[00:35:19]
We created this working group, as we talked about earlier, thanks to the guild, which, in the field, became aware of this need and started working autonomously on it.
[00:35:33]
We created material, took five motivated teams who started to test the workshop that Anis was talking about and to manipulate the different metrics. We co-constructed in this group, we shared things, and above all we were then able to show the guild, to start with, the results of our experimentations.
[00:36:05]
Then, for communicating the vision: once we had done this first exercise over a quarter, with five teams that had roughly followed the path and gone all the way to a report on the team's performance, we organized a presentation with the members of the steering committee, HR, and the managers above the engineering managers. We described the whole operation, and we wanted to do it with precisely all these stakeholders so that there would be no misalignment with other initiatives at HR, with talent review topics, things like that, or between the product and engineering departments, which could each have done things that sometimes needed to be visible on both sides. So we really put everyone in that room to show the result of the work. To incite action, we did a lot of little things. First, we produced all the templates so that the engineering managers themselves, when discussing their objectives with their manager, arrive with pre-made objective templates. So here it was the managers who went to see their superior, saying: in my objectives, rather than inventing one every time, we're going to try to use this one. That's the idea; the incentive was there. Some tribes worked very, very well like that. And moreover, with their Head of, we decided to set the same objective for all the engineering managers, so they all know they have the same objective, and in addition they can work on it together and co-work on the subject to achieve something useful.
[00:38:09]
Organizing reviews of deliverables between managers is what we do now. As soon as the end of the quarter arrives, we use the quarterly reports you saw earlier, we present them, and the managers can ask other managers, or the ambassadors of the first working group, for re-readings. That allows us to challenge, to reflect on actions, to share initiatives that could be done in several teams and therefore to parallelize them. And to consolidate: today we are in a phase, and here we are starting to get into the future of the approach, where, as I told you, a new reorganization is underway. We already have all the tools in the field that allow us, when reporting requests come in, to say: in fact, we have such a tool, we will be able to do it. Here we treated the team level; earlier we talked about the system level, so within the tribe, the same type of report, the metrics for several teams. Today we can go further in the framework, build this new axis, and co-construct it with the Heads of and the engineering managers.
[00:39:53]
And the last one, anchor the new approaches. Today, the objective is that for any new EM who arrives in the company, the EMs, through the guild and the ambassadors, train them to use this approach, which is now well known in the company, with a lot of people who can act a bit as buddies for the new EMs who have to do this work. So that's the approach, the eight steps of Kotter. It's something you can look at
[00:40:28]
afterwards. If you have to make a change in your companies, think of this gentleman.
[00:40:38]
So, we're supposed to finish here, but we told ourselves that if we had time, we would give a bit of a spoiler of what we intend to do next, still following the work of Nicole Forsgren. In 2023 there was a new article, a bit of a follow-up to this work, called DevEx. It's another framework. The idea of this framework is to ask: what's the point of measuring productivity? If we manage to measure the developer experience and to improve it, we will gain in productivity. That's the main idea. So they proposed a framework a bit similar to this one, based on three axes: flow state, feedback loops, and cognitive load. Always with a bit of the same logic: we have different axes, and then we have some example metrics for these axes. For example, the time needed for CI execution, for the feedback loop; or satisfaction with the time needed to put code into production, etc. So it's very oriented towards developer experience. The novelty is that, for once, when we talk about developer experience, we are not only talking about tooling, or satisfaction with the tooling and the engineering system; we really go further. We talk about flow, about feedback loops, about cognitive load, etcetera, with this idea of improving the developer experience. So we plan to draw inspiration from this work to continue what we are doing with Space: extract the right metrics for developer experience, integrate them into our internal framework based on Space, measure this developer experience in the right way, and then use it in the teams, always for continuous improvement. So that will be a bit of the follow-up of the work on this topic.
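As one possible way to pull a feedback-loop metric like CI execution time, here is a minimal sketch against the GitLab pipelines API (URL and project ID are placeholders; pagination is ignored):

```python
import os
from statistics import median

import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder
PROJECT_ID = 1234                            # placeholder
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

base = f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/pipelines"
pipelines = requests.get(
    base, headers=HEADERS,
    params={"status": "success", "per_page": 50},   # pagination ignored in this sketch
    timeout=30,
).json()

# The list endpoint does not return duration, so fetch each pipeline's detail.
durations = []
for p in pipelines:
    detail = requests.get(f"{base}/{p['id']}", headers=HEADERS, timeout=30).json()
    if detail.get("duration"):
        durations.append(detail["duration"])   # seconds

if durations:
    print(f"Median successful pipeline duration: {median(durations) / 60:.1f} minutes")
```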
[00:43:08]
It's really over.
[00:43:13]
It was just for the microphone.
[00:43:18]
Are there any questions? Ah, there are.
[00:43:22]
up there, up there.
[00:43:25]
Each team has different metrics, but for the same metric, is it measured in the same way, technically speaking, across the different teams?
[00:43:36]
In general, because it's still the same company and the same system, it looks similar, but it's absolutely not a deliberate goal; for the reasons we explained, we don't try to compare the teams, we don't have a specific preconception about that. Avoiding doing the same work twice, if possible, is rather a good thing, for once.
[00:44:00]
Yes.
[00:44:01]
Thank you for this presentation. Uh, you mentioned 40 teams initially, you started with five. Where are you today?
[00:44:09]
And how long has it been?
[00:44:11]
So, in all transparency, the reorg we put in place recently has put the subject a bit on pause. Today, I think around a dozen teams, approximately, have done the workshops and started to write things. However, in the coming weeks and months, the objective is to reach all teams, and in any case all teams are aware; all of them, or at least the vast majority, have validated and reviewed the work of others, so they are waiting, they are already trained. It's more a matter of time, and then once it becomes a bit mandatory, we'll get there. It's often like that, but in any case, all the indicators tell us it will happen in good spirits.
[00:45:05]
Loic, you can maybe. He nodded.
[00:45:10]
And how long has it been?
[00:45:14]
It's been about a year since we started the process. It took time at the beginning to set it up and start the change management, so it really started to be implemented, or at least to accelerate, about 6 months ago. And then it was put a bit on hold, as Clément said, following the current reorg that keeps people busy on quite a few topics; so, about a quarter. We intend to continue now that things are starting to fall into place. In any case, I myself led a lot of bootstrap workshops; some teams continued afterwards to produce reports, etc., others stopped there because they moved on to something else. So we're a bit in a scaling phase, if we can say so.
[00:46:09]
Yes, thank you. Uh, after having set up this measurement framework, what are the benefits you have observed in the teams, whether it's in terms of management or the well-being of employees?
[00:46:21]
First, a collaborative approach: when you give developers a voice, generally they are more productive, happier, and it works better, strangely enough. So we don't have false information, false indicators, watermelon indicators, etc. It's also a tool for certain teams, who report this to us: they regularly set aside time for continuous improvement, what we call in-days and the like, and this gives them workshops to add to their catalog of things they could do during that time. And also, these measurement topics were often led by the EM, by the manager, whereas this approach involves the developers themselves more. So there's a group effort, it goes faster, and the feedback is more interesting; the measures are also more interesting.
[00:47:28]
From my side, I also saw engineering managers who were especially happy to have a tool that helped them better integrate this activity into the team's life, an activity that was more solitary before, because they didn't necessarily have the means and tools to share and co-build. Now they have things a bit ready-made: workshops, bootstraps, facilitation by a third party, which means they can do this activity more easily, and since the developers are generally quite happy to do it, it's super virtuous.
[00:48:11]
Uh, I wanted to know if you had, as a result, any, well, metrics or teams where the metrics had dramatically improved. Uh, if you have any stories to tell us about that.
[00:48:27]
Uh, we have some; I'm looking for one that's easy to tell. I think the most precise example, and I don't think he's here today, but he's at the conference: one of our colleagues had just taken over a new team and, at the same time as transitioning the team to a fairly disciplined Kanban flow, was able to write down the whole evolution and improvement of his flow. For people who know Kanban well, he was able to track all the trends of his lead time and cycle time, explain the cleaning and sanitation of his backlog, and he did it through this document. So he really has a document from the first quarter with everything his team had committed to and done, and that's a real measure of performance that everyone likes, with +80%, things like that. It was a real tool to write down and highlight the results. I don't know if you see other examples.
[00:49:21]
As an example; I was looking at Loïc just now because he's one of our engineering managers with whom we started all this. If you want to share something about that, wherever you want.
[00:49:55]
Hello everyone. Uh, as an engineering manager, so I was one of the early adopters. For me, it had many positive aspects in my team. Already, it was about allowing the team to appropriate things that it generally sees as something negative, a monitoring element of performance or productivity, which is the bad word, generally. Uh, sometimes it also allows us to confirm things that weren't seen, that is, uh, we, for example, had looked at deployment frequency, for example, and we realized from the first iteration that we were elite, in fact. So we said, "Well, that's cool."
[00:50:32]
Plus, right afterwards we wanted to refactor our CI/CD, and we tried to maintain the same level on these indicators. So again, it allowed us to make improvements and to see whether, before and after, we had stayed at the same level of performance. There's another team too, where it was code review.
[00:50:50]
Uh, they were wondering about their review time. And again, very quickly, within six months I think, they saw a clear improvement in their review time and in the effectiveness of their code reviews, of their PRs. I think it was in another talk that we saw it could cost a lot of time and money. So there were a lot of experiments, individually in each team, indicators that we wanted to monitor and improve. And what's good is that we grow our corpus of indicators; we only publish certain indicators externally, as was said earlier, but we keep an eye on all of them, whether it's the performance of our engineering system or the performance of our tests. It's a whole set of indicators that the developers, the team, appropriate and share in retros, etcetera. There, I don't know if that answers the question.
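For the code review example, the turnaround can be approximated from the GitLab merge requests API as the time between opening and merging; a minimal sketch with placeholder URL and project ID, pagination ignored, not the team's actual script:

```python
import os
from datetime import datetime
from statistics import mean

import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder
PROJECT_ID = 1234                            # placeholder

mrs = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    params={"state": "merged", "per_page": 100},   # pagination ignored in this sketch
    timeout=30,
).json()

def parse(ts):
    # GitLab returns ISO 8601 timestamps such as "2024-03-01T10:12:33.000Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

turnarounds = [(parse(mr["merged_at"]) - parse(mr["created_at"])).total_seconds() / 3600
               for mr in mrs if mr.get("merged_at")]

if turnarounds:
    print(f"Average open-to-merge time: {mean(turnarounds):.1f} hours over {len(turnarounds)} MRs")
```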
[00:51:54]
Yes.
[00:51:56]
Uh, thank you for the presentation. I'm here.
[00:52:01]
Uh, have you identified a correlation between the evolution of team performance and the value or benefits generated by the company?
[00:52:12]
Well, I would say it's too early; I don't know if Clément agrees. I would say it's too early given the implementation: even if we started a year ago, there were periods of interruption and there was the change management. So I would say it's a bit early for now, but it will be very interesting to measure afterwards.
[00:52:38]
Hello. Thank you for the presentation. Uh, it will echo the previous question a bit. Uh, you explain very well that these are metrics that are not necessarily to be taken in absolute terms, uh, that it depends on the context, that each team will have its own metrics. How do we answer Agathe, the CEO, at the beginning, that she may still need to know if all these hirings have an impact on the ROI? Uh, have you had any feedback from C-levels on this already?
[00:53:06]
So we say, and perhaps we didn't say it clearly enough: we encourage everyone to measure the trends in their own operation. And we are in a company that is 20 years old, with hyper-growth not long ago; we have teams that are very recent and teams that are several years old, even a dozen years old. And depending on the context, we have cases where we need to get retention under control,
[00:53:40]
so that people stop leaving because there are more interesting technologies elsewhere, etc. We have teams where you have to motivate people on products that don't necessarily have the same appeal.
[00:53:54]
If we don't open the hood, we don't think about that. And so, for the moment at least, with the Heads of and the technical directors, we manage to build an argument, because precisely, we have teams that had problems in their context and are starting to solve them, that are starting to rehire when they couldn't hire anymore. So they have something to tell that shows a clear evolution. And that, we manage to push upward; we manage to co-build it, and once there's a more direct demand, we can reread all of that and build a new, more systematic argument.
[00:54:37]
Even if today we don't yet have a ready-made formula. So that's how I see answering you, at least; it's rather through this type of action.
[00:54:47]
Hello, thank you. Uh, I have a question about how you determine the frequency of measurement, and do you reject all indicators that cannot be automated?
[00:54:57]
Uh, we absolutely do not reject indicators that are not automatable, we have many that are surveys in fact. So everything that is, uh, well-being, so everything that is satisfaction, etc. These will be surveys, so very little automatable in reality. Uh, what was the first question?
[00:55:20]
The frequency.
[00:55:21]
The frequency, yes. The frequency, we recommend doing it once per quarter. But the teams are free to do it as frequently as they wish. So if you.
[00:55:31]
Even?
[00:55:32]
Yes, uh, but after, the implementation of the measures is once per quarter, but the monitoring of the evolution of the metrics is rather by sprint or by week, yes.
[00:55:43]
of report to.
[00:55:48]
There you go, it will also depend a lot on the measure itself. In any case, there are no directives on this, it's the teams, it's up to the teams.
[00:55:58]
Hello, thank you. My question is a bit related to the previous one: for metrics that are not automatic, how long does it take the developer, what is the necessary workload to precisely fill the template you showed us?
[00:56:13]
Uh, it depends, once again, a lot on the measurement. Generally, we can, generally we propose surveys that are very short.
[00:56:22]
Teams can have their own too, they can show them to us. When they have their own, we give them an opinion, generally if it's too long, we tell them it's too long, it takes too much time from the developer. The idea is that it should be quick and easy to fill out.
[00:56:37]
I often encourage, and have for a very long time, people not to necessarily automate everything right away. And I'll give you a precise example.
[00:56:47]
There are plenty of cases where you need the information once a week. Between trying to automate it, and simply having it somewhere, say in a Jira dashboard the API doesn't expose, going to get it and copying it into a small table once a week takes you five seconds a week. Frankly, compared to trying to figure out how it was produced in order to regenerate it, there's often too much effort for what you finally automate.
[00:57:12]
So we have plenty of cases, the most obvious case, or the one I use most often, is the calculation of.
[00:57:28]
In Jira, of your flow and your cycle time, or your lead time, depending on which column you take. You can get it very quickly in the analysis of your flow through the built-in dashboards. On the other hand, if you have to calculate it yourself, the API doesn't give it to you directly, and if you recalculate it, you have to reinvent the dashboard. So typically, it's the kind of thing where you just take your data back once a week and it's done.
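For teams that nevertheless decide to recompute it themselves, a minimal sketch of reading one issue's status transitions from the Jira Cloud changelog (the site URL, credentials, issue key and status names are placeholders):

```python
import os
from datetime import datetime

import requests

JIRA_URL = "https://your-site.atlassian.net"   # placeholder
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def status_transitions(issue_key):
    """Return (timestamp, from_status, to_status) tuples for one issue."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    out = []
    for history in resp.json()["changelog"]["histories"]:
        when = datetime.strptime(history["created"], "%Y-%m-%dT%H:%M:%S.%f%z")
        for item in history["items"]:
            if item["field"] == "status":
                out.append((when, item["fromString"], item["toString"]))
    return sorted(out)

# Cycle time = first entry into "In Progress" until the move to "Done" (status names vary per board).
transitions = status_transitions("PROJ-123")   # placeholder issue key
start = next(t for t, _, to in transitions if to == "In Progress")
end = next(t for t, _, to in transitions if to == "Done")
print(f"Cycle time: {end - start}")
```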
[00:58:05]
Did you already have, among these 40 teams and all the management, a bit of a measurement culture that was already in place, the idea of measuring what we do? Or is it something you had to push for it to take hold?
[00:58:21]
Uh, yes and no. No, because culturally, with this hyper-growth, well, I often say that a hyper-growth company is like riding a galloping horse without a saddle, and we just move forward. So we didn't have that culturally. However, we recruited many senior engineering managers who came with the background and the skills to do it. So we also have a lot of people who today are not completely in the model, but who had something very, very similar, and with whom we could look at each other without any concern and say: the day we need, for homogeneity, to go there, we'll go, but I have my slides, my file, my dashboard, and globally I do pretty much the same thing. So we have great autonomy between the teams, and we have many teams with real experience and experienced people who lead their teams with this culture. I just passed the microphone, but I'll take the last one.
[00:59:31]
Thank you. A question a bit similar to the question that was asked just before. Uh, how do you show that the teams are progressing? Because the metrics are not the same. There are some metrics that are hidden. So when we have to show, for example, to the board, etc., that the teams are progressing, what is formalized, uh, well, over time, that is, in terms of dashboards, in terms of consolidation?
[00:59:56]
So that's what will be provided in the reports. Okay? So in the reports, we will have objectives for team improvement. So that gives information about their maturity, about the subjects. It also gives information about where they are, what their current performance is, if we want. Uh, and so the indicators that will be in the reports with the right context will give this additional information to say, "Okay, so this team has gaps here, it needs to improve on such a thing, etc." Uh, without necessarily going into the ease of comparing numbers without really having the context and the constraints of each team.
[01:00:35]
and would therefore ultimately give a comparison that is false from the start, but also, quite simply, information that is not the right one, because a number without the context that goes with it, without the constraints and the explanations, means everything and nothing. So it's not easy to give this information simply, to say whether a team is progressing or not. But by following a team through these reports on a fairly regular basis, we know where it started, where it is today, what measures it uses, etc., and that can also lead managers to make the right decisions, and that's what is ultimately useful.
[01:01:22]
And I would add that it also allowed us, I don't know who asked the question, to reverse the trend a bit and to say instead: well, there's a problem, there's a doubt about a team, but come and discuss it with us. Instead of absolutely wanting to push the information up, to certify and convince everyone that we have it, if they want to come and help us do better, they can come and we'll pull out all the data and look at it with them. It's also a change we're trying to instill, and it's generally rather well received, because they know there will be a real discussion based on facts and data.
[01:02:08]
Okay, thank you. I'm sorry for the second question, but we have to stop.