Elodie Mermet, Chris Dupin
Transcript (Translated)
[00:00:00]
Hello everyone, uh, sorry, we had to make a small technical adaptation because the room doesn't like Macs, but it's okay, we'll get there. So we switched to Linux, which is a bit new for me; you'll understand why later. We're going to start with a quick overview all together, so we're going to ask you to be active for the next three minutes. We're going to give you a few statements, Chris and I, and you'll have to raise your hand if you feel concerned. And be careful, don't lie, please. First statement: I have already encountered delays in the deployment of my projects. We agree.
[00:00:57]
Uh, second statement, I have already encountered communication problems within my team, or even with other teams. Yes, no problem.
[00:01:08]
Last statement, I was part of a project where the initial design or functional specifications had to be modified a bit at the last minute to put it into production more quickly.
[00:01:22]
I see someone lying in the front row, but no problem. So indeed, that's our challenge, and it's a challenge that is rather common, because we all have in mind the functioning of an optimal agile team or an optimal organization. We try to create a kind of multidisciplinary bubble where communication is increased, with the aim of achieving one precise objective. The problem is that the growth needs of the organization can really undermine these communication channels and reduce the effectiveness of these teams: as teams grow and multiply, their scope becomes narrower and narrower, and their need to communicate with other teams increases. And organizations, humans, software, and processes do not evolve at the same pace at all. So a question and a challenge arise: how do we ensure that the expectations, needs, and constraints of each person are met? How do we level everyone up to get the best out of each discipline and specialty? And how do we use processes and tools without imposing them? In any case, it's a challenge that particularly interests us at Back Market, and that's why today we wanted to talk to you about how we managed to cross the streams between the different teams and areas of expertise, to make sure that the expectations and needs of each person are met as well as possible.
[00:02:54]
And so, before telling you how we solve our challenge, we're going to introduce ourselves. For those who don't know Back Market, it's a marketplace that sells refurbished devices, and our objective is that you no longer have any reason to buy new electronic devices. Here are a few figures that represent us; I'm not going to describe them all. What will interest us today is the more than 700 employees, so you see that we are already a great team. And within the team, what's important for us is to monitor the CO2 emissions we managed to avoid by making it possible to buy refurbished products instead of new ones. We are quite proud, because here we can see that each employee, what we call the Backmakers, contributed in 2021 to avoiding the emission of 628 tons of CO2, which is not negligible.
[00:03:47]
And so among those 700-plus Backmakers, you have two of them in front of you today. I'm Elodie, a Principal Designer at Back Market for a little over 3 years. I work on the pre-sales and retention parts of the customer experience, and I started in the after-sales team with Chris.
[00:04:10]
And I'm Chris, a back-end developer at Back Market for a little over 3 years too, and for my part, I've always stayed on the after-sales side. What led me to be interested in these issues is that I entered through the door of domain-driven design and how to better communicate with the different members of my teams.
[00:04:33]
Uh, let's set a little bit of context. We are in a company that has really been in hypergrowth, so we tried to respond to our challenges; that doesn't mean you have exactly the same ones or the same problems and so on.
[00:04:47]
As I said, at Back Market, humans, processes, and organization really don't evolve at the same pace at all. Which means that as the teams grew, we had to talk to more and more people and integrate more and more skills into the different teams. For example, on the after-sales side, when I arrived at Back Market we were a team of, I think, 5 engineers and 10 people in total, and 2 years later we were four teams with about 40 people. That poses many problems in defining ownership between the different teams, and as we add new expertise, the rules of the game change completely: we no longer have the same constraints when talking to others, and in the end we don't even know the others' constraints. It is also necessary to take into account that no team is immutable: adding or removing a single person completely modifies the dynamic of the team. So trying to take all this into account is quite difficult and complicated. Also, not all teams necessarily have the same level of maturity, because as I said, each time we add or remove a person, we dilute or sometimes lose a part of the product knowledge, the different processes, and our way of working.
[00:06:03]
Also, one of the symptoms this created is that we observed unrespected deadlines and, especially, a lot of frustration. Frustration shows up in many different ways. There's frustration on the business side, because neither their objectives nor their functional needs are met on time. Frustration on the engineering side, because we have the impression of not building the right thing, neither at the right time nor in the way we would like. Frustration on the design side, because they always try to propose an ideal experience for the user and always end up with a completely degraded one: out come the scissors, a story you probably know. So these are some of the problems we encounter: often, in fact, expectations are not clear. Business, engineering, design, and so on each have their own expectations and constraints. For example, sometimes Elodie comes and asks me, can you tell me how long it would take to implement that? The problem is that I'm on another project, so I don't have time to do an in-depth analysis at all, and I tell her, oh yes, it's done in a week, no problem. Except that when we get to it, all of a sudden it's not a week, it's a month, in fact. So we don't necessarily have the same expectations at the same time. Iterations also grow, because as we add skills we accumulate more and more constraints that we don't necessarily take into account at the beginning, in the management of our projects and the products we want to release. An example I have in mind: we are present in 17 or 18 countries, which means that each time we release a new feature, we have to translate it for all the languages in which we operate. From a purely back-end engineer's point of view, it's just a translation key with three pieces of information; we don't necessarily realize that behind it there are three weeks of work to write and validate all these translations. It takes a lot of time. Similarly, a minor design change in one of the screens can trigger communications with the brand team, the one that manages the visual identity of the site, but also with the team in charge of all the common front-end components, so a minor change can trigger enormous conversations. And also, and this is what I like the most, we have completely different languages: the same concept can have entirely different definitions depending on the persona, the context, or even the bounded context, if I dare say, depending on the person we are talking to.
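To make that translation fan-out concrete, here is a minimal sketch, assuming a hypothetical key name, default string, and locale list (none of this is Back Market's actual tooling): one key looks trivial from the back end, but it expands into one human-written, human-validated string per open market.

```python
# Hypothetical sketch of the translation fan-out described above: the key
# name, default string, and locale list are illustrative assumptions.

# What the back-end engineer ships: one key with three pieces of information.
translation_key = {
    "key": "aftersales.refund.confirmation",
    "default": "Your refund of {amount} will arrive within {days} days.",
    "variables": ["amount", "days"],
}

# What localization then has to write and validate: one entry per open market.
locales = ["fr-FR", "de-DE", "es-ES", "it-IT", "pt-PT", "en-GB", "en-US"]  # ...up to ~18

pending_translations = {
    locale: f"[to translate] {translation_key['default']}"
    for locale in locales
}

for locale, text in pending_translations.items():
    # Each of these lines hides human work: writing, reviewing, validating.
    print(locale, "->", text)
```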
[00:08:39]
One of the symptoms of that, which I myself hate, is asking the product manager, "How is this supposed to work?" and getting an answer with ever more acronyms or synonyms that I don't understand, so that I have no idea what they're talking about.
[00:08:52]
And we even realized that ultimately, even a word and a concept as standard as MVP, minimum viable product, had a completely different meaning depending on the persona and expertise.
[00:09:06]
That said, not everything is bleak. We have teams at Back Market that have managed to deliver projects that went well, we have processes that work pretty well, and so on. The thing is, the knowledge and know-how are already present; as we grow, it's the way we access them that changes completely. Here and there, teams have processes that have led to successes, and some of these teams have managed to handle these communication transitions with other teams by assigning ownership and so on. So, if we know that the know-how and the different skills are actually there, the question is: how do we retrieve this knowledge and these processes, and how do we transmit them to the greatest number? One thing we'd like to do, for example, is learn how the payment team does A/B testing, because they're really good at it; we'd really like to know how they do it. Or how the after-sales team manages to write functional specifications.
[00:10:09]
Which we do very well.
[00:10:11]
And also how they do it in other teams: for catalogue management, for example, how do they co-design between the different front-end engineers, product, and product design? How could we do that?
[00:10:26]
Well, to solve our challenge, we decided to create a playbook that we called the Solution Discovery Playbook. It's a set of ready-to-use processes that help teams implement good practices for the proper conduct of their projects. It's called the Solution Discovery Playbook because it starts from the moment we have already defined the problem to solve and the business opportunity, and prioritized it in our roadmap, and for now it stops once the MVP is implemented; later, it will certainly evolve. Here is an overview of what you could find in our toolbox: in the green rectangles, you have the different macro steps of a project, and under each of them, the processes we have listed so far in our playbook. We're going to focus on one of the processes, the one at the bottom: defining the specifications and priorities in a product requirement document, a PRD. What you see on this process is that it doesn't belong to a single stage of a project; it really follows the entire life of the project. The PRD, for us, is a document on Confluence with different sections: the project objectives; its success criteria; the functional needs, which at the beginning are very, very macro, so at the user-flow design stage they are really just a few rows in a table, barely in the form of user stories, and as the project progresses these user stories get detailed, which also lets us spread the load of functional specification so that not everything has to be done at once. We also have what's out of scope, which is very important to stop the debates as soon as someone else joins the project, say a new business stakeholder who wants to add something extra: we log in this document why it's out of scope and that it will probably be part of a next iteration. Also key is keeping the history of questions and decisions made during the project. On very short-term projects, this section may be a little less useful, but on longer-term projects, or when we come back to iterate, we reread this PRD and understand why we made that decision at that moment, and it also lets us settle certain debates that might arise later in the project. We already have good feedback on the adoption of the PRD. What's good is that it generates debates upstream of the project, instead of those debates happening too late, when we realize there are functional complexities we hadn't identified that would force redesign or adaptation of specifications. And it really allows us to align all the people involved in the project, including the business stakeholders, so marketing, ops, etc., so that they can see how we are progressing and the decisions that have been made, and ultimately become ambassadors of the project.
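As a hedged illustration, here is what the PRD skeleton described above could look like as a data structure; the section names follow the talk, while the field types, the helper class, and the sample content are illustrative assumptions, not the actual Confluence template.

```python
# Minimal sketch of the PRD structure described in the talk. Section names
# follow the talk; everything else is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class Decision:
    date: str
    question: str
    decision: str
    rationale: str  # why it was decided this way, to settle later debates

@dataclass
class PRD:
    objectives: str
    success_criteria: list[str]
    # Starts as a few macro rows, detailed into user stories as the project progresses.
    functional_needs: list[str]
    # Logged explicitly to stop debates when a new stakeholder joins.
    out_of_scope: dict[str, str]  # item -> why it's out (often "next iteration")
    decision_log: list[Decision] = field(default_factory=list)

prd = PRD(
    objectives="Reduce refund processing time",            # hypothetical example
    success_criteria=["Refund issued in under 48 hours"],
    functional_needs=["As a customer, I can track my refund status"],
    out_of_scope={"Partial refunds": "planned for a next iteration"},
)
prd.decision_log.append(Decision("2022-05-10", "Which markets first?",
                                 "France only", "limits translation load for the MVP"))
```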
[00:13:44]
So how did we go about creating this playbook? We're going to share our recipe with you; it can of course be adapted to your organization. For now, it has worked pretty well for us. First, what's very important is to consider the playbook as a project in its own right; otherwise, it will always be the thing that gets deprioritized, and ultimately it will not exist. So, like any project, define its objective. For us, it's to reduce rework and therefore to reduce delays.
[00:14:16]
Like any project, you create a project team, and so we have a casting of four sub-groups. The first group is the Ops Team, which is responsible for facilitating the implementation of the playbook and monitoring its adoption. The Working Group is composed of several areas of expertise, so here you see there are about ten people. As expertise we have content designers,
[00:14:40]
backend engineers,
[00:14:42]
product designers,
[00:14:43]
mobile engineers,
[00:14:45]
product managers,
[00:14:46]
front-end engineers.
[00:14:47]
and we even had quality engineers.
[00:14:48]
Oh, you stole it from me.
[00:14:49]
There you go. And so this working group is responsible for prioritizing and writing the playbook processes. Then, third, we have the feedback group, which has roughly the same composition as the working group and rereads the written processes, making a first pass to improve them. And then we have the sponsors, not to be neglected either, who are really there to help with the adoption of the playbook. The sponsors are more C-level, like the CPO and the CTO.
[00:15:21]
Like any project, there is a schedule. The playbook has no end date, because we don't want our documentation to become obsolete; it will improve continuously as Back Market's team structure changes. On the other hand, don't hesitate to set milestones, and dates on those milestones. Here is the schedule we had: last year we started with the initial assessment, then the writing of the processes, then a first feedback phase on those processes, then the presentation of the playbook, and finally improvement and adoption of the playbook. We are still in the improvement and adoption phase, and we will certainly stay there for a very long time, even as we add other milestones by iterating and adding other processes. It may seem long, but keep in mind that it's not our main task within Back Market; it's a transversal initiative.
[00:16:18]
So Chris, can you tell me how we did the inventory?
[00:16:21]
Well, with pleasure. First of all, as a starting point, we made a big chronological timeline with all the steps we had in mind for the realization of a project. When do we prioritize the different functionalities we're going to add? What are these features? When do we align on the MVP, and what does that mean? When do we write user stories, and what do we base them on? When and how do we want to deploy? Do we deploy first in France, then in other countries, and so on? So we really put all that on a big chronological timeline, and once we had all these steps more or less in order, we tried to list all the small problems we encountered, the avenues for improvement we had, and so on. Don't try to read it; it's more to give you an idea of what it might look like. Here in purple, we have a step, for example a prioritization workshop or dependency mapping. In red, you have a problem or a frustration, all reported by the working group: for example, I'm a content designer and I find that we have this problem; I'm a mobile engineer and we have this problem here; and so on. In green, we have processes or tools that work quite well, where people are actually happy to use them, which is also interesting: what we are trying to do is also recover all the good ways of doing things, so it's quite important to capture those too.
[00:17:46]
Then in blue, we have all the ideas for improvement, tools we could use, things we would like to change. Or, quite simply, everyone comes to express the expectations and the different needs they have. Like: I work on the content design side, I need to do translations, so I need to know, for example, what all the translation keys are and what their different variables are, so that I can do things correctly.
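A minimal sketch of the timeline's color coding as a data model; the four categories come from the talk, while the enum, the sample items, and the grouping are illustrative assumptions.

```python
# Sketch of the inventory's color coding. Categories follow the talk;
# the enum and sample items are illustrative assumptions.
from enum import Enum

class Kind(Enum):
    STEP = "purple"        # a project stage, e.g. prioritization workshop
    FRUSTRATION = "red"    # a problem reported by the working group
    WORKS_WELL = "green"   # an existing process or tool people are happy with
    IDEA = "blue"          # an improvement idea, tool suggestion, or need

inventory = [
    (Kind.STEP, "Prioritization workshop"),
    (Kind.STEP, "Dependency mapping"),
    (Kind.FRUSTRATION, "MVPs are far too big from the engineers' point of view"),
    (Kind.WORKS_WELL, "Payment team's A/B testing practice"),
    (Kind.IDEA, "Expose all translation keys and their variables to content design"),
]

# Group by kind to see where the timeline hurts the most.
for kind in Kind:
    items = [label for k, label in inventory if k is kind]
    print(kind.name, "->", items)
```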
[00:18:13]
To take an example here: the engineers reported that the MVPs were often much too big from their point of view, because each time, the engineers proposed to cut the MVP into even more parts, the MVP within the MVP. Their goal, from the engineer's point of view, was to be able to deliver much more often and therefore reduce the risk on the project, validate assumptions much faster, and so on. It certainly comes from a good intention. The problem is that it then generates a lot of frustration and rework on the design side, because information has been removed, so the different pages have to be reordered; maybe entire pages even disappear, and their content has to be adapted so that it still makes sense as these small increments are delivered. And this work also has an impact on the content design side, the people who choose the actual words used: when the cut changes, they too must redo their work. So that generates a lot, a lot of frustration. Why? In this specific case, as I told you, we hadn't agreed on what an MVP ultimately means, and we were really not aligned on how we prioritized the different functional requirements. And one of the biggest sources of this problem was that the tech side was brought into the definition of this MVP too late, or even at the end. So all subsequent modifications changed all the work that had been done upstream, which was frankly not ideal and generated a lot of frustration. One of the suggestions we made to correct this was to use product requirement documents, so that we align everyone, tech included, and the different stakeholders on the project scope: what goes in, what doesn't, and so on. And also to validate the fact that it's okay to start with uncertainty, with open questions that haven't really been answered yet, as long as we do it deliberately.
[00:20:21]
Another example is that the different dependencies between teams and products are discovered too late, or even worse, on the fly, while we are in the middle of doing the thing. That forces us to re-communicate the project's goal to the different teams and to try to convince them to modify their roadmap. It's never a walk in the park; they never do it with pleasure, because they have their own priorities, and what is important for us is not necessarily important for them at all. So, having the tech discovery at the same time as the product discovery allows us to surface the different dependencies between teams directly on what we want to implement. And it's interesting because it allows us to negotiate, and even abandon or change requirements if that lets us, for example, remove a dependency on another team. So it allows us to iterate faster and to chain iterations.
[00:21:18]
Well, I've given you two examples, but I still wanted to show you roughly what it looks like in real life. I think some things are missing; we focused on, let's say, certain big steps and not absolutely everything. There were many, many suggestions and many points of frustration raised, but also many ideas, and many processes that already exist in some teams and work well. But there are many of them, which brings us to the question of prioritization.
[00:21:47]
How do we do to move forward from now on?
[00:21:52]
Well, first, it seems obvious, but we're going to keep what works. Some teams reported that it worked well when the functional specifications were established earlier, in the discovery phase. Others said that the technical investigation also took place earlier in their teams, anticipated and planned upstream, almost in parallel with the design phase. Other teams reported rapid design review sessions with the entire squad, held more frequently, which allowed them to avoid creating little monsters and realizing too late that they had to make those famous cuts.
[00:22:34]
Then, we look at what works less well and what generates the most delays. Here is the top 3, let's say, of what worked less well. It may seem obvious when we present it; it's just that sometimes, when we're heads-down and a team grows fast, we end up with things that seem basic but that unfortunately happen in real life. So, the MVPs that we thought we had created were perhaps not so simple: even if the engineers were involved, they were not involved enough to really look in detail at what was complex. What didn't work was when product and engineering didn't work on the project at the same time. And what also didn't work was a lack of synchronization with all stakeholders, the business side included, who were involved upstream of the project and then again at the end; that created debates at the end of the project, and sometimes it was necessary to change a little of what had been designed. After this assessment, we saw that we still had quite a few things to improve. Well, that's good; we have to see it as an opportunity. But you have to start somewhere, so we made an MVP of the playbook. The working group selected the main steps to improve as a priority. What came out of it is that we must first improve the prioritization of features, even before starting the project, and that's something we do thanks to the PRD. Then, the creation of the user flow must be more collaborative, because from that phase we can identify whether we are starting to create monsters. Same for the co-design phase, with more frequent rituals with the engineers. And we had to see how to change the sequencing of the technical investigation, which arrived too late. After that, the working group divided into subgroups, each took one of the steps to improve, and wrote the processes. Before writing the processes, we agreed on certain principles; we have four. The first is accessibility: it is very, very important that each process is understood by all members of a project team. So we avoid acronyms and far-fetched words, or at a minimum, we define them in the process.
[00:25:03]
Second principle, each process must be independent. The idea is that each team picks from the playbook the processes that will help it perform better. Our objective was really not, and it still isn't, to create a method to be applied strictly from A to Z. For one thing, that would be very complicated for adoption, because we know we are human and adapting to change is complicated. What's more, the teams weren't doing everything wrong: there are things that worked for them, and the goal is that they keep what works and pick from this toolbox what allows them to improve. It's also easier to implement for each of the teams. Third principle, each process must be actionable. We made sure the write-up of each process could be scanned very quickly, and we associate templates with it: Confluence templates to adopt it more quickly, templates on FigJam for workshops, and I think design review templates recently too.
[00:26:05]
And the last principle is that the processes must be uniform, so they all respect the same format. If you have to change how you read the documentation for each process, you won't want to read it at all. So we set up a common format for each process. Here you have the PRD process I was talking about at the beginning. On this process, we have a quick definition; when to apply it; when not to apply it, because it's not necessarily relevant for all projects; and how to do it, and here again we don't write 3,000 paragraphs, we really go in bullet-point mode, with tips, links to templates, and examples from people who have already applied it. We also include the success criteria of the process. And we finish by keeping a history of the process's improvement. It may seem trivial, but in reality it's very important, because it encourages people who use the processes to be proactive and to participate in their continuous improvement.
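A sketch of that uniform process-page format, assuming the sections just listed; the structure and sample values are illustrative, not the actual Confluence template.

```python
# Sketch of the uniform process-page format described above; section names
# follow the talk, the structure itself is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class ProcessPage:
    name: str
    definition: str                      # quick definition, scannable
    when_to_apply: list[str]
    when_not_to_apply: list[str]         # not every process fits every project
    how_to: list[str]                    # bullet points, tips, template links
    templates: list[str]                 # Confluence / FigJam template links
    success_criteria: list[str]
    changelog: list[str] = field(default_factory=list)  # history of improvements

prd_process = ProcessPage(
    name="Product Requirement Document (PRD)",
    definition="A living Confluence document aligning all stakeholders on scope.",
    when_to_apply=["Multi-team projects", "Projects with business stakeholders"],
    when_not_to_apply=["Tiny, single-team changes"],
    how_to=["Fill objectives and success criteria first",
            "Log out-of-scope items with the reason",
            "Record every decision with its rationale"],
    templates=["<link to Confluence template>"],
    success_criteria=["Fewer reworks during implementation"],
)
```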
[00:27:09]
And so Chris, how is the adoption going?
[00:27:13]
So, once the working group has written all these different processes, we want them to be adopted by as many people as possible, and what's more, it's quite important that we don't try to impose them. We really want these processes to be of quality. One of the first steps we took is that we used the feedback group to reread all these processes, ask questions, and align the terminology, saying for example, ah, you're talking about product owner on this page but product manager on the other, something's off; to try to remove the jargon just about everywhere; and to really make sure that each process meets the different criteria: accessibility, independence, actionability, uniformity.
[00:27:51]
The point of having these rereadings done by someone who hasn't written the process is that they don't have their head down in it; they have a really external point of view. Moreover, since the feedback group is composed of people who come from completely different areas of expertise, we get varied points of view and ways to improve these processes quite effectively. How did we do it? Well, for us it was really organic: it was done through comments in Confluence, we had common rereading sessions from time to time, sometimes synchronous, sometimes asynchronous, etcetera. Everyone did it a little as they wanted, and it worked pretty well. The only really important point is that you really have to give deadlines and dates for completing the rereading of these processes. As we said, we treated this as a real project, but it's not the only thing we have to do either, so set dates so that we keep moving forward; it's really important.
[00:28:55]
And don't hesitate to remind people, because it will certainly be at the bottom of their priority list, so gentle harassment works well.
[00:29:06]
And after that, there is a step that is really crucial: once the processes have been reviewed by the feedback group, we have to promote the Solution Discovery Playbook to gain a little more traction. The fact of having recovered a large part of the processes from the teams that already used them helps a lot there, because we have fewer people, quote-unquote, to convince. But we still have to find teams that volunteer to test and give feedback on the different processes. And there, it's the role of the Ops Team to push and communicate about the Solution Discovery Playbook. How did we do it? Well, we have big meetings, for example with all the product managers and all the people related to product, where we go through the playbook and present it to them; we do the same on the engineering side, etcetera.
[00:29:56]
We also create dedicated meetings, sometimes with product managers and engineering managers, to present the Solution Discovery Playbook to them, with the different processes and tools we have in place that they can pick from if they need them. We also look for projects that could be good candidates, and above all, we offer our help via process helpers, saying, well,
[00:30:21]
you want to use the PRD, well, I'm here; you can contact me anytime for a brief overview of how it works, to show you two or three examples from other teams, to unblock you if you seem to be stuck, and especially to simply reassure people. And once we have a lot of teams ready to test, we give them the freedom, as I said, to take the processes that interest them. The one thing we ask, in fact, is that they measure the number of reworks, the small changes they had to make as the project progressed. That's the KPI we use to try to see whether a project is going well or not. And so now we have several teams that are starting to test these different processes, etcetera.
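As a hedged sketch, counting that rework KPI could look like this; the event log, labels, and severity levels are illustrative assumptions, not Back Market's actual tracking.

```python
# Illustrative sketch of the rework KPI: count the changes a team had to make
# to already-validated work as the project progressed. The log format and
# severity labels are assumptions.
from collections import Counter

# Each entry: (what changed, severity). Logged by the team during the project.
rework_log = [
    ("Moved two labels on the refund screen", "minor"),
    ("Reordered confirmation page sections", "minor"),
    ("Reworded one CTA after translation review", "minor"),
]

by_severity = Counter(severity for _, severity in rework_log)
print(f"{len(rework_log)} reworks over the project:", dict(by_severity))
# e.g. a two-month project with only 3 minor reworks reads as a good signal.
```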
[00:31:11]
And once that's good, and they've been able to provide their feedback, which has been taken into account, then, in the section that Elodie mentioned, what's important is that everyone can participate, propose improvements, or even say, ah, careful, if you are in this case, it's perhaps not a good idea to use this process, etcetera. From there, the different teams come, more and more broadly, to use the processes that interest them, and we really start to get a lot more feedback and to improve these different processes as we go. So that's the part that theoretically never ends, and that's good. But there is also a slightly difficult point, which is that projects are often part of a more or less long-term vision, so you always have to find the right opportunities to say, ah yes, in three months you're starting this project, maybe you can try that. So for the moment, having quantified metrics as we go along is a bit complicated. On the other hand, we are really starting to hear good, positive feedback: people who are happy to have worked on a project because it was very clear, happy to have done co-design sessions with several members because it helped them a lot, etcetera. So people generally are really happy when they use these processes, and they say so, which is cool, even if it's more difficult to quantify.
[00:32:36]
And today, well, we have several months of implementation behind us. So Elodie, can you tell me a little more about the benefits
[00:32:45]
of the project? Yes, yes. Well, keep in mind that the playbook, at least for us, is a very powerful tool. First, because it created a feeling of belonging from its creation. It's not a method that came from managers who were a bit too distant; it was really created by the people who have their hands in the dirt every day and who carry out the projects. And we extend this feeling of belonging to the people who test the processes, by telling them: don't hesitate to add your own processes if you have an idea, or to improve existing ones. Then, this toolbox is powerful because we don't impose a rigid method. That's maybe specific to our organization, because each team has its own way of working, but when that's the case, it's very important not to arrive with a ready-made box and impose it; this way, each team can improve instead of going through a very long phase of adapting to change.
[00:33:52]
We don't reinvent the wheel. Most of the processes in the playbook are well known in the industry; they have sometimes been used internally in some teams, sometimes in previous jobs, and that also makes it easier for teams to adapt to these new processes. So it's really a sharing of best practices. It may seem trivial, but when you are a large team of several hundred employees, it is still important to have a small common toolbox that you can access at any time. And it also works well because it's done progressively.
[00:34:33]
As we told you, it's not our main task at Back Market. So you also have to be able to tell your managers: well, I'm going to have to dedicate this much time to this project. And if you want to release an encyclopedia all at once, the time required will be enormous, and there is a high chance that your manager will tell you: well, Elodie, you're very nice, but such-and-such a project is more important, so we'll do that later. And later equals never. So go step by step: make a small playbook. We had about ten processes because we had quite a few volunteers in the working group, but it can be three. Test them, see whether they actually helped the teams that tested them, and then add other processes.
[00:35:19]
And well, honestly, Chris, can we say if it works?
[00:35:27]
Well, yes. But it's not that simple, uh, we've mentioned it several times, but adapting to change is something difficult. For everyone.
[00:35:38]
Um, we also really want to emphasize the importance of sponsors at that moment. Having sponsors at C-level who come and push on the product side, like, hey, you should be interested in this, is really very, very important and helps a lot in the dissemination of this playbook. One of the other advantages of having volunteers in the working group, feedback group, etcetera, is that they also become emissaries in their different teams and through the different points of contact they have. One of the cases we had, for example, in Care, is that we use PRDs a lot, and we had a project with the team responsible for sending the phones from point A to point B; we created the PRD, they grafted themselves onto it to understand what we were doing, and in the end they said, oh yes, indeed, it's very, very clear. So it lets us gain a little more traction like that.
[00:36:30]
Also, there are some processes that don't apply every week, so getting feedback on them is complicated. Having real buy-in from all stakeholders, whether it's engineering, etcetera, can be a bit difficult, because you also have to arrive at the right time, at the moment they might need it. And you have to communicate, communicate, communicate the existence of this playbook, the fact that it's there; help people; say: if you have a problem, we are here to help. As I said, the feedback loop can be really long depending on the process. Also, as I said at the beginning, teams don't necessarily have the same maturity. So, seeing the finished result of a process applied by a team that has been doing it for years, when you want to get started yourself, can seem extremely intimidating, and then you're afraid to apply these new processes. There again, we try to provide help via the process helpers. And one of the points we also found a bit difficult is having a kind of objective metric, partly because we wanted a single metric that corresponded to all the processes of the playbook. We decided to measure rework, but knowing exactly what to measure and how can be quite difficult, especially because we can't really A/B test the same project with and without a given process; it's not really possible. So what we can know in the end is: yes, people are happy to have used it, and there was little rework. But would there have been more or less if they hadn't used it? It's difficult to know; we can't really know.
[00:38:13]
But overall it works. What I think I'm most happy about is that we have people outside the feedback group, the working group, and the whole organization of this project, who decided to add a process that they found important, that they liked, and that worked particularly well in their own team. They came to us and said, ah, what you did is good. Hence the importance of communicating very broadly that the Solution Discovery Playbook exists and where to find it. And so we helped them integrate their process so that it has the same format, how to try to measure it, etcetera. In terms of adoption, I counted not long ago: for the PRD process, today we count more than thirty PRDs created outside of the teams that were doing it initially. That's quite a lot. We had dozens of co-design and design review sessions, and in fact we hear more and more positive feedback from people who are happy to work together, across sometimes very wide parts of the company, up to the business side, etcetera.
[00:39:19]
For example, on one of the projects, we had a decrease in rework because we were able to count that we only had three minor reworks, so we really changed the layout of two labels, etcetera, for a two-month implementation project, so we were quite happy.
[00:39:33]
Yes, because even if we can't compare, well, know how it would have gone without the processes, we still have a good feeling that it's quite unusual to have just three minor reworks for this size of project.
[00:39:51]
Well, if you haven't listened to anything, Elodie, will you give us a recap?
[00:39:54]
So, before you leave this room, it's time to wake up. Here's what you need to remember. If you want to implement your collaborative playbook, it's important to consider it as a project: think about the objectives, the casting, and the schedule. Again, the schedule will be a bit unusual, because there's no end date; you still need to plan the different milestones, though. And it's very important that there is no end date, because we always want to keep improving this documentation. Do not neglect the creation of the team. It can be a problem not to have all the relevant areas of expertise represented in the team. We experienced it by not having product managers: there were more engineers and designers represented in the Ops Team, and it was a small hindrance to the adoption of certain processes. And it's not because our product managers are lazy; it's that they have a lot, a lot of work. But they were part of the working group and feedback group. Above all, do not neglect the communication strategy; it starts at launch and never stops. It's very important to remind people, as Chris said, that this playbook exists, including for new arrivals. When projects used processes, try, when they present their project, to have them indicate: we used such-and-such a process from the playbook. Before the project, communicate on our Slack channel: we will use such-and-such a process from the playbook, etc. Repeat it constantly; it may seem a bit repetitive, but in the end that's how it becomes a reflex in people's minds. That's really the heart of the matter, because otherwise it's just nice documentation that isn't used, and you've wasted your time.
[00:41:45]
The documentation, once again, is living: it must be maintained and improved by everyone. By everyone, I don't mean that literally everyone has to improve it, but it is collaborative; it's not improved by just a small group. Each of the processes gets improved, and the playbook is also improved by adding other processes in the future.
[00:42:13]
And so thank you all for your attention.
[00:42:16]
Okay. Thank you.
[00:42:24]
For questions.
[00:42:26]
Do you have any questions?
[00:42:34]
Thank you for your presentation and for sharing your experience. So, I might have missed the information, but I didn't grasp how the approach got started: what made you say, okay, at one point, we need to go with this playbook project? That's my first question, and I have a second one: what is a playbook feature? Because I didn't understand that either. At one point you talked about features, and I thought, what is a playbook feature? There you go.
[00:43:04]
Okay, thank you. So, for the launch of the playbook, we didn't mention it, so you didn't miss it. Within the design team, we did a retrospective on the things to improve in our way of working. What came up was that certain projects we designed were put on hold or sometimes abandoned, and what we designed often had to be reworked when implementation started. Seeing this problem, which was common to several designers, we thought, okay, we'll have to act and talk to our colleagues to find out how we can be more efficient.
[00:43:43]
And for the features: for example, as a user, I'd like to be able to log in to the site. That's one feature, and depending on the different projects we want to deliver, we have a list of these features we'd like to have. Some are more important than others, some can have more impact than others. That's what we mean by feature.
[00:44:02]
Did I answer?
[00:44:03]
The prioritization of features, is that it? It was about prioritizing functionalities. The objective of the playbook?
[00:44:12]
The objective of the playbook?
[00:44:14]
You were talking about the features of the playbook.
[00:44:18]
I think everyone heard me. But basically, yes, uh, you're talking about, I don't know if you can go back a little bit earlier on your slides. Even earlier, even earlier.
[00:44:41]
Ah, stop. Roughly there, anyway. So, basically, yes: playbook functionality. It's no big deal, I might be the only one who didn't understand, but...
[00:44:50]
Oh, sorry.
[00:44:52]
Because at one point, you talked about an MVP of the playbook. So I suppose, in short, I consider a playbook to be a product, and so there are functionalities. What are the functionalities of a playbook? That's an example.
[00:45:02]
Well, the MVP was actually the minimum number of processes to launch so that the playbook would be effective. So sorry for not making myself understood.
[00:45:19]
Uh, hello, thank you for sharing. Uh, my question is about, I didn't see a training step. Neither during the implementation, nor during the deployment, nor, uh, nor now. I don't know if there was a need for training on this playbook. And if you train new arrivals, for example? So there you go.
[00:45:43]
I'll answer. So, training: we mentioned it in the communication strategy. It was more about noting, during the onboarding phases, that this playbook exists, here are its processes, and showing an example. Then we let people read each of the processes, and, as we said about the process helper, in each process we say: you want to put it into practice, contact such-and-such a person, and this person will act a bit as a coach for the teams setting up the process.
[00:46:16]
Hello, thank you for your presentation.
[00:46:20]
In the case of the PRD, in fact, there are also many examples of things that already existed. It's a written document, so it's quite easy: we can always refer to, ah, okay, there is this section, this is the way they wrote the different things, etcetera. Lots of examples. Because these are processes we actually gathered bottom-up, we already have application examples.
[00:46:43]
Um... Pardon. Yes, here.
[00:46:48]
The playbook is now at the scale of a product. Do you think it's possible, in a scaled agile environment, for example, to transfer this playbook to a finer granularity, such as a feature, over 3 or 6 months, something like that? Could we have a playbook at a smaller granularity? And then, which elements of the playbook could we adjust to make sure it adapts?
[00:47:19]
So, in the playbook, in fact, we have a lot of processes that try to correspond to each stage of a project, let's say. So if we want to write one for how to describe a feature or something, we can do it. And the format we have actually specifies when to use a certain process and when not to. So if we want to be that precise, I would say it's possible; I don't see any contraindications. It's just that at that granularity, we risk having a lot of processes. Is that a problem? I don't think so, in our case, given that the different teams come and pick what they need or where they need to improve. So yes, I think we could.
[00:48:03]
So you explained that it was very complicated, once the playbook was in place, to A/B test. Of course, we're not going to do the same thing twice, once with, once without.
[00:48:12]
Nevertheless, I always have a little phrase in mind that says we only improve what we measure. But does a team, before setting up the playbook, or at least before using it, measure its rework rate? And can you then see the impact on the rework rate, not in A/B testing mode, but just in before/after mode?
[00:48:29]
I don't think we measured it before. On the other hand, we do a lot of retrospectives in the different teams, and I went through a lot of different team retrospectives in all the Figma boards we have. The problem of communication and rework comes up absolutely all the time. Plus, all these friction points were brought up by people who are on the ground all the time, across different areas of expertise. So even if we are not able to quantify how problematic it is and how much rework we do, we know that at each retrospective it's brought up by a lot of people. There you go.
[00:49:13]
Any other questions? Otherwise we have a little bonus.
[00:49:18]
Small, huh?
[00:49:19]
Small, very small.
[00:49:20]
There's a question, there's a question.
[00:49:22]
Ah, we're going to prioritize the question.
[00:49:25]
Sorry.
[00:49:27]
Regarding the size of the playbook: the bigger it is, the more difficult it will be, and the less people will want to dig into it, because, well, it's knowledge management. So how do you see the management of this playbook's life and its growth?
[00:49:50]
Yes. So, we have already had proposals for processes that were ultimately very close to what we had written. So we try to see: when it's very close, it's actually part of the same process, with an option A and an option B if they really differ in their implementation. That way we avoid creating too many processes for something similar. There's also the way we present the playbook: categorizing it well by project stage, so that each person can say, I'm interested in improving this stage, and I don't need to go through all the processes of the other stages.
[00:50:31]
And then, in terms of governance, who is in charge of categorizing?
[00:50:36]
Uh, so that's the team.
[00:50:39]
Then we have a methodology, a sequencing of the project steps that is quite common, so it's quite simple: it's a shared frame of reference.
[00:50:53]
Regarding taking into account the feedback that allowed you to establish this documentation: is it done in small groups, at the scale of all the professions involved in the company? And is there a voting system set up to move towards the final solution? How does that happen?
[00:51:14]
So, in fact, it's the feedback group: for the first processes, it was the feedback group that wrote the different pieces of feedback. Then each person from the working group who had written a process took note of this feedback and made a selection: okay, is this really a problem, etc., and decided whether to improve the process or, on the contrary, keep it as is.
[00:51:40]
Are you going to work as a team?
[00:51:43]
Yes, yes, yes.
[00:51:44]
Okay, okay.
[00:51:47]
So, after that, not everyone gives feedback. I mean, they can, but not everyone does. So for now, it remains a manageable workload.
[00:51:57]
One last question.
[00:52:05]
Hello, thank you for the talk. One of your initial problems was the presence of the EM in the discovery process. The question is: at what point do you make them intervene, and what role do they have? Is it exclusively an EM role, or are they, for example, a team's lead dev, or something like that?
[00:52:27]
I'm going to answer. The problem we had before was that it was especially the EMs who were involved throughout the project, while those who got their hands dirty, the engineers of each platform, were not sufficiently represented; that's where we discovered other complexities. It's not that the EM had done a bad job; it's just that other complexities surfaced when getting our hands dirty. Now, by involving an engineer from each platform from the beginning of the process, we avoid discovering big monsters too late.
[00:53:02]
Well, thank you, and it's over. Yes, thank you.
[00:53:10]
If you are interested in helping us improve this playbook or anything else, do not hesitate to contact us.