Ismaël Héry

Transcript (Translated)

Ok, great. Let's go. Hello, my name is Ismaël Héry. Thank you for being here, I'm very happy to be with you. I hope this will be short enough to leave us a little more time for the Q&A. In any case, it's quite short, so don't hesitate to interrupt me if you have questions or sudden insights during the talk. Please do.
Very quickly, a few words about me.
We'll get there. I was an Agile and Lean coach until 2011 at a company called Octo. In 2010, I implemented Kanban for activities at Le Monde, the experience David mentioned yesterday.
And since 2011, I've been working at Le Monde, where I was responsible for development, then responsible for the project to overhaul the tools used by journalists, the CMS, the back office. And starting in December, I will be responsible for product and IT. But never mind. What I'm going to talk to you about today is this.
When we try to design, create, operate, and maintain the digital products we make at Le Monde, we face major challenges. We have an execution challenge: we need to get things done, and we need to learn a lot. We feel this is a theme that runs through the conference; we heard a lot about it this morning.
So I'm going to share this challenge with you, the fact that we need to learn a lot, the things we put in place to try to make this learning as effective as possible, and then some illustrations and feedback on these points. Ok, what do we do? I'll go quickly on this, I think you know what we do. Our main product is the free website, with about 100 million unique visitors. We have a paid site, the premium offer, for which you need to be a subscriber; we have about 70,000 pure digital subscribers.
We make mobile apps. And now, for the past month, we've also been doing this.
The point is not to let it drop. So yes, for a month now, we've actually been doing this: we developed an in-house tool that allows journalists to write, edit, and correct their content for the web, digital, and print. The idea is that, as in many organizations until now, we had separate newsrooms, a print newsroom and a digital newsroom, working in separate tools, who cordially hated each other. The organizations are coming together, and so we created a tool to support this merger.
So, those are the main digital products that our management and users ask us to build. Now, a few characteristics of our context. What's special about us? We're not a pure player; we're a company that, until now, has printed things on paper, and will still do so for a few years, though it will probably stop sooner than we think. So we really have a clash of two cultures: a print journalism culture, a culture of print workers, and a culture of digital journalists and people who make digital products.
I saw the click happen just one or two years ago, when these two cultures realized they would really need the strengths of the other culture to survive. So the contact is more direct and massive than it used to be. But we still have reflexes, ways of thinking, mindsets that are very, very different. So that's one characteristic of our company culture.
As you know, it's economically very difficult. No one has yet found a way to keep online or print news businesses viable. The New York Times and the Guardian, which we often take as examples, are not examples at all; they are not companies that make money. The New York Times decided a few weeks ago to cut 100 journalist positions, and the Guardian is burning money on a completely insane scale. They can afford it, good for them. In any case, it's very tough, which explains why we have relatively few resources and strong constraints. We can't say, 'We have a small problem, let's double the team or hire 10% more developers.' Those aren't options available to us.
We use technologies that aren't very satisfactory; we even use buggy technologies, let's say it plainly. By unsatisfactory, I mean things that aren't stable, that are full of bugs. A recent example: the first update to iOS 8, iOS 8.0.1, broke cellular reception. I don't know if it happened in France too, but it certainly did in the US. So you had a smartphone that was super smart, but it wasn't a phone at all; the phone function was simply gone. And that's the case with almost all the components, frameworks, and platforms we use; they're really buggy. By the way, you may remember the Toyota Way principle: use only mature technologies that have proven themselves. That's not at all what we do.
Another particularity is that most of our projects finish on time, for several reasons. The first is that we have few resources, as I said earlier. The second is that many of these projects are tied to current events. So we don't even need to discuss the cost of delay.
If we're doing a project for an election night, obviously... imagine we're a bit late, so we try calling: 'Hey François, would you mind postponing the evening, doing it in two days? We have a great animation for the map, but we're not ready yet.' Obviously, that's not done. So, constraints that mean projects, unfortunately or fortunately, finish on time. And we also have a cultural element; an old hand at Le Monde made me understand this. He told me, 'Look at yourselves: every day, even if you're scrambling like many organizations, every day you produce something like this, and it goes out at 10:30 AM.' Really, I've witnessed this several times. We have chaos, we have a lot going on, people who don't get along very well, but the clock is ticking, and as 10:30 AM approaches, suddenly people focus, get to the point, and the thing goes out on time. We face huge financial penalties if it doesn't.
Ok, so those were the characteristics of our context, and they lead us to our problem. What problem do we face when we try to design and develop these tools?
We're at point A, trying to get to point B. Point B is when our clients, both internal (management) and end users, are satisfied, and when we've respected budget and time constraints. And there's something we often underestimate: the budget and the costs of operations and maintenance. The ops people and the developers who fix bugs: if that's way beyond what we planned, we're not at all within the constraints we set at the beginning.
So we're here, trying to get to point B... sorry, there we go, we'll get there... and in fact, we have no idea what happens in between. I was a project manager, so I made plans: 'first we'll do this'; obviously we do agile, so we do iterations; 'first we'll do this, then we'll have a milestone, we'll install the framework, we'll set up the...' I've written all these things, but in reality, we have no idea what happens in between. There are a lot, a lot, a lot of uncertainties.
Many questions that we don't even know to ask yet. The questions will come up during the project, and on top of that, we have no idea, at the start or even a bit later, what the right answers to these questions are.
In my job, and I think it must be the case in many of yours, there are three main areas of questions. The first area concerns user experience: the problem we're trying to solve, ergonomics, the need. A lot, a lot of unknowns: what exactly does the user want, and so on. Then we have questions about development: will we be able to develop quickly? How long will this feature take? And finally there's operations, what in French we call 'prod' or 'infra,' depending, where we also have a lot of unknowns.
So first, we realize we have these three areas of questioning. Examples, on the product side: is this really a user problem? Or is it something I think is a user problem but the user doesn't actually have? Will what I'm currently developing, with my brilliant ideas and superb design, work once it's in the user's hands? Nothing is less certain.
Between these two features, should we really focus on this one, or rather on that one? There's an infinity of these; we could continue endlessly, and you undoubtedly have your own. Next, examples of existential questions to which we have no answers at all at the start of a project, on the dev team side.
Should we use this framework, this library, or the other library that has roughly the same features on GitHub or another platform? How long will it take to develop this feature? Do we know how to do automated testing on this technology?
Once we scale up, we'll have significant scaling issues. When I scale up, what will my first bottleneck, my first choke point, be? Likely the database. It's the database that will fail, or the web servers that will fail, or Redis that will fail. We have a feeling, because we've done somewhat similar projects, but in reality, we know nothing.
On the production side: what effort will be necessary to deploy across the different environments? When you make your super project plan with its end date, you're making an assumption about that. You tell yourself it will take roughly this much time to set up the deployment tools, and then you'll be able to deploy roughly every week. In fact, you know nothing until you've done it. What are the main weaknesses of my system? Once in production, what monitoring and alerting tools will I need? Remember, point B is when everything works, the user is satisfied, the system is stable. So before reaching B, you really need to answer these questions: what are the fragilities? How will I be alerted?
How many servers will I have to buy? Is it 10? Is it 20? Will I have to go to Amazon because I can't afford all these servers myself?
Okay, so many questions in these three areas to which we don't yet have the answers.
So if I rephrase the situation a bit: we know that we know nothing. We know that if we want to develop and go to production in good conditions as quickly as possible, we will have to learn very, very quickly about all these questions, in all these areas. So the question ultimately comes down to: how can I make my team learn very effectively, very quickly, about all these questions?
The hypothesis, my hypothesis, is that we should strive to learn in all areas. We won't favor one at the expense of the others. We won't be super strong at learning about product and UX questions, for example, and very, very bad at learning about production; that would be quite terrible. And above all, we won't wait too long to learn.
I'm going to present a few practices. Obviously we didn't invent them; you probably implement some of them as well. They are super interesting practices because they are necessary for executing a project and building a product, but we believe they are also super interesting for learning, because they meet the criteria we just saw: they allow us to learn very early, to learn about the different areas, and to learn very effectively.
The first practice is user testing. If I had to give a rough definition: we observe our users actually using our final product in near-real or real conditions. They're at their workstation and you barely intervene; you let them be, you observe them from a distance. So there you go: observing a user, once a month, using your product. With this definition, who here conducts user testing? Raise your hand... and lower it. Okay.
So, user testing.
We all have a lot of experience in our respective fields, so we all have intuitions about what will work and what won't. Half the time these intuitions are good, but really, it's this practice that, from this point of view, gives us big slaps, big wake-up calls, and makes us learn a lot. The idea is not to... we all know those quotes from Ford and Steve Jobs. The idea is not to wait for the user to formulate the problem and the solutions for us, articulate it all, sign it with their little user name, so that we just have to implement it. Obviously, it doesn't work like that. But user tests are super effective for validating your hypotheses about what the problem is, and for validating your hypotheses about the solution. In terms of cost and investment, they're extremely effective compared to many iterations.

It's a bit strong, but I said on Twitter that I'm tired of hearing 'fail fast,' in the sense that we encourage it a lot in our communities: you have to fail quickly, fail quickly. Yes, failing is good, we learn. But if we can learn without failing, that's even better. User tests let us barely fail, because we haven't necessarily started developing yet; we've built early mock-ups or things with a very, very low investment. So 'fail fast' is surely a good slogan for making failure less frightening. But still, if we can succeed just as fast and just as well without crashing, all the better. In this regard, user tests are super effective.
So, the characteristics of user tests that make them work for you, that make them succeed for you. The first: you need to do them very often and very easily; they must be inexpensive, cheap. So in fact, we do them ourselves. Culturally, in many web companies in France, we're quite old school and these are things we outsource. There are firms, I don't remember the names, to whom we outsource our user tests. So we write a proposal (it takes a month to write the proposal and describe what we want to test), contract with the chosen provider, design the tests with them. Now we're at two months. They run the tests, one or two weeks; they analyze the results. Now we're at three months. You see: I tested two or three screens, and it took me three months. That's completely insane when I need it to take a few hours or a few days, so I can do it all the time, every week, every month. So the first mistake is outsourcing this skill. It's still crazy that we imagine (I say crazy, but we still do it, not entirely, but we do) outsourcing the understanding of user needs. Many marketing and product teams in particular are very afraid, saying, 'Oh, it's complicated, we need to call in ergonomists, it's rocket science.' And our friend Steve Krug, who wrote a cult first book on usability called...
Does anyone remember Steve Krug's first book? Well, never mind. Yes, *Don't Make Me Think*: I'm a user, I don't want to think when I use your interfaces. He then wrote *Rocket Surgery Made Easy* to demystify user testing and explain it simply. So there you go: do them yourselves, try. The first few times, you'll inevitably mess up, but honestly, there are plenty of guides, little tips, and checklists in the Lean Startup literature and from Steve Krug to help you run your first user tests fairly quickly and at low cost.
So that's an example of a small grid.
It was about the CMS, a test on the CMS. We wanted to verify a hypothesis: that the user immediately finds how to create an article. They were facing the interface, there were several possible menus, and we wanted them to find, quickly and in a fairly obvious way, the menu to create and start writing their article.
So there are instructions at the top, and we have a facilitator: someone who stays fairly distant, who tries not to get too annoyed when the user does whatever with their product, but who says, 'We'd like to test this, can you do it?' And then we have a scribe who takes lots of notes: whether the objective was achieved, the number of clicks, the time spent, the number of errors, and so on, plus various remarks, which could be the user's swear words, for example.
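To make this concrete, here is a minimal sketch of such a note-taking grid as a data structure; our real grid was a simple spreadsheet, so every name and field here is illustrative.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """One row of the note-taking grid, filled in by the scribe."""
    task: str          # instruction given to the user, e.g. "create an article"
    hypothesis: str    # what we expect, e.g. "finds the right menu immediately"
    achieved: bool     # was the objective reached without help?
    clicks: int        # number of clicks used
    seconds: float     # time spent on the task
    errors: int        # wrong turns and dead ends
    remarks: str = ""  # free-form notes, swear words included

def summarize(session):
    """Print a quick per-session summary, to compare sessions over time."""
    done = sum(r.achieved for r in session)
    print(f"{done}/{len(session)} objectives achieved")
    for r in session:
        status = "OK" if r.achieved else "KO"
        print(f"[{status}] {r.task}: {r.clicks} clicks, "
              f"{r.seconds:.0f}s, {r.errors} errors. {r.remarks}")

# Example: the CMS "create an article" hypothesis described above.
session = [
    TaskResult("Create a new article", "finds the create menu immediately",
               achieved=True, clicks=3, seconds=42, errors=1,
               remarks="hesitated between two menus"),
]
summarize(session)
```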
So, some stories about user tests at Le Monde. This one is more of a fail. We developed this thing in 2013: a geolocation of content on a map. It was pretty cool to design and develop, with the OpenStreetMap APIs and so on. We were super proud of ourselves. And we did user tests, but very, very late: we did them a few months ago, more than a year after the product launch. It turns out users don't really understand what this product does. In fact, many didn't even notice it. It's in the premium features, you have to be a subscriber, and many subscribers don't even notice it. It's a bit low on the premium homepage, but not that low. So already: we invested a lot in this thing, and they don't know about it. And among those who do know about it, many don't understand it. So the audience shrinks even more. This experience really lets me sell and explain the value of user testing internally: remember the map we did a lot of work on? We were sure of ourselves; we said, yeah, this thing is really great, it's bound to take off. In reality, users aren't as enthusiastic as we are.

Second anecdote about user tests, on the CMS. In every user test, something we thought was difficult (we had debates among ourselves: it's not going to work, they won't understand where to go) turned out, every time, to be obvious to the user. We were completely wrong; our intuition was false. And every time, something we thought was so obvious that we didn't even question it, that we didn't even have a test step for, the user got completely stuck on and didn't get through at all. We had to tell them, 'It's like this, give me the mouse, I'll show you, let's move on to the next step.' With every user we had this case: something obvious that wasn't, something hard that was. Big slaps and huge learning every time.

And a third anecdote. We were testing with personas, you know, profiles to segment users a bit; it's a bit of theater. We had some geeky people in our profiles, because there are young journalists who do very visual things, who code, who develop. And we have journalists who are past retirement age (I believe journalists have the right to work until who knows what age), so we also tested that profile, which was a good extreme for us. The test was about copy-paste, because, well, journalists copy-paste; it's nothing special. We wanted to verify that they understood how it worked in this tool. We told one of them, 'Now, you want to copy the content that's on the desktop.' And he said, 'The desk on the fourth floor?' No: obviously, we meant content on the Windows desktop. When he told us that, we went, OK. Our user, our journalist persona, thinks like that, has that kind of problem. And that gave us a good, good slap. So to summarize: do user tests, and do lots of them, and bring your developers along if you can, to drastically increase the empathy they can have with users.
Otherwise, they sometimes take shortcuts like 'but that's obvious, it's a non-issue,' when in fact, for your user, it won't be a non-issue at all. On user tests: take 10 seconds and ask yourself, right now, what's the next step,
what can you do to go watch your users use your product in the coming days? Are you going to read *Rocket Surgery Made Easy*? If you have a consumer product, are you going to test it at a Starbucks or on a TGV? Don't answer me, but ask yourself the question: what are you going to do? Okay. The second practice we implement, that we use as often as possible, is deploying as early as possible; I was even going to say too early.
So we all know beta versions; Google popularized that. But what's more interesting than the beta system itself is how, maybe on sub-parts, on sub-uses, on a technical sub-part, on a sub-scope, I can push something incomplete. Incomplete along many possible axes; there are many ways of being not complete. And explore this idea of the incomplete: how far, how fast, incomplete things can go. Okay, so here are some little cooking recipes on betas, on beta versions.
The first recipe is to fake things, to rig a lot of things in the system.
The learning is so huge that we're ready to make an investment, and we do it, and it's always worth it: doing a double run, connecting systems inside your information system, having a temporary pipe between this system and that one. The old payroll system with the new payroll system if you're a bank; for us, it's the old CMS with the new CMS. We do a lot of this: a lot of temporary, disposable development that isn't very useful in itself, except to get a beta version that kinda works on a sub-scope, for a sub-use, for a sub-population. But the learning is so huge that we make this investment. So we do a lot of hacking.
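As an illustration of how small and disposable such a bridge can be, here is a sketch of a one-way sync between a new CMS and an old one; the endpoints, payload fields, and polling interval are all invented for the example.

```python
"""Disposable one-way bridge: copy articles written in the new CMS into the
old CMS, so a sub-population can work in the beta while everyone else keeps
their usual tool. Endpoints and payload fields are hypothetical."""
import time
import requests

NEW_CMS = "https://new-cms.example.internal/api/articles"
OLD_CMS = "https://old-cms.example.internal/api/import"

def sync_once(since: float) -> float:
    """Fetch articles modified since `since` and push them to the old CMS."""
    resp = requests.get(NEW_CMS, params={"modified_since": since}, timeout=10)
    resp.raise_for_status()
    for article in resp.json():
        # Keep only the fields the old CMS understands; drop everything else.
        payload = {"title": article["title"], "body": article["body"]}
        requests.post(OLD_CMS, json=payload, timeout=10).raise_for_status()
    return time.time()

if __name__ == "__main__":
    cursor = 0.0  # epoch seconds; naive, but good enough for a temporary pipe
    while True:
        cursor = sync_once(cursor)
        time.sleep(60)  # poll every minute; this code is meant to be thrown away
```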
The second recipe, as I mentioned earlier with 'fail fast' (I don't really like the slogan, but I still do it): in the beta version, we put things in users' hands that are quite broken, not necessarily functional, where not everything is there, and what is there doesn't completely work, since we inevitably go to production a bit too early. As a result, we have to provide a lot of support to our beta testers, which takes a lot of energy. So we have to recruit them carefully: to represent the different personas and profiles, but we also pick people who are rather benevolent.
Because otherwise, after an hour of this, with something that's only half-finished, the person will clam up like an oyster, say, 'Okay, you're really wasting my time,' and go elsewhere. So we take rather benevolent users. They're not fully representative, because in real life we also have difficult people who aren't all benevolent, so we take a few difficult ones too, to keep it fairly representative, but mostly benevolent people who will play along and work with us on this beta version. We spend a lot of time encouraging them: okay, yeah, and what do you think about it? We gather their feedback constantly. We encourage them to use something that isn't necessarily functional. We really listen to these people, and we spend a lot of time with them, because they save us a tremendous amount of time, even though we make them lose a little of theirs in the process.
And we compensate them a little, meaning: in a Kanban, maybe (I've never visualized it this way) they have some features that are just for them, the beta testers, to encourage them to keep using our broken beta version. It's not them who drive the product roadmap, it's not them who lead; they're not product owners. But we give them something fairly regularly.
We compensate them.
When we're not able to deploy a functional, usable scope, a usable sub-module for end users, whether the general public or journalists, we strive to go to production as quickly as possible with sub-modules that are technical and testable by the tech teams, developers or ops. This teaches us a lot. In fact, all major production deployments are broken down, and technical sub-components go into production long, long before the functional sub-components. So we test a lot of things: we test stability, because even something that isn't heavily used can crash. We realize that even something unused, once deployed, can be a nightmare: we lack tooling, it's not supervised. So even if it's something invisible, we learn a lot by deploying early. What example can I give? All the products I showed you at the beginning were actually available in production weeks, if not months, before their actual use.
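One cheap way to ship a technical sub-component long before its functional use is to deploy it dark behind a feature flag: the new code runs on real traffic, but nobody sees its output. A minimal sketch, with a deliberately naive flag store and hypothetical function names:

```python
import os

def flag_enabled(name: str) -> bool:
    """Dead-simple flag store: FEATURE_<NAME>=1 in the environment.
    A real system would use a config service or a database."""
    return os.environ.get(f"FEATURE_{name.upper()}") == "1"

# Stubs standing in for real code; all names are illustrative.
def render_classic_homepage(user):
    return f"<html>hello {user}</html>"

def exercise_new_search(user):
    pass  # the new technical sub-component, deployed but dark

def log_error(message, exc):
    print(f"[error] {message}: {exc}")

def render_homepage(user):
    page = render_classic_homepage(user)  # the path users actually see
    if flag_enabled("new_search"):
        # The new component runs against real traffic, but its output is
        # discarded: we learn about stability, load, and tooling early,
        # without exposing anything to users.
        try:
            exercise_new_search(user)
        except Exception as exc:
            log_error("new search failed", exc)
    return page

print(render_homepage("reader"))
```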
Okay, weeks before; I told you that, so it's done. The advantages of deploying very, very early: the first is to focus product managers, product owners, call them what you want, on real user problems.
These people have all kinds of ideas, good and bad, and once the product is really in the hands of users, the priority, the real problem for my user, becomes obvious to everyone. Okay, we all thought it was this; no, not at all, it's that. So putting things into production very quickly and regularly ensures that you focus product managers on your real problems. Same with the tech teams, the ops and developers who always make plans. Engineers think: yeah, my problem at scale will be Redis, because there's no master-slave failover or some such. Or: once we're at scale, it's this component that will fail. Or: oh no, we're going to get attacked, a denial of service, hackers will attack us. In the real world, denial-of-service attacks, we're the ones who cause them most of the time. It's rarely hackers; nine times out of ten, it's us. As for the problems we anticipate: let's say half the time we're right, it is the next problem, and half the time we're completely wrong. So this also prevents you from wasting engineering time on, I don't know, setting up a Redis cluster or a PostgreSQL cluster when that's not your next problem at all. So it's pretty great: deploying to production, encountering problems, and focusing the people responsible for solving those problems on the real issues.
This also allows us to improve robustness very early on, whether active or passive robustness (I don't remember exactly which, with reference to Don Reinertsen's keynote). We put the system into production, we use it a little, a lot, passionately. And it will crash. And so, faced with real problems, in real production conditions, we see that we have this problem and need to implement that countermeasure to make the system more robust. So this allows us to gradually improve
our system, with real material for continuous improvement. In fact, all these problems, if we don't deploy in beta, we'll encounter later anyway, on the day of the production launch and in the months that follow. Here, we just start encountering them a little earlier, so the continuous improvement of production problems starts much earlier. And what's also great is that it engages people: it makes them more passionate, more excited about the project. On projects that can last six months, a year, a year and a half, or even much longer, people are in production sooner. Okay, they're not in production with all the features, sometimes not even with real users, but there's a little of the excitement of the big production launch, and of the crashes, because we will have crashes and big problems in the weeks that follow. That adds excitement and energy to a team. It's the big launch night, just much earlier.
Okay, a few stories about early production deployments. So, this is the subscriber homepage we launched in 2013.
So, this homepage, which you access if you're a subscriber, has particular editorial features: the content isn't the same, the featured areas aren't the same, the refresh rates aren't the same. All this to say that journalists don't work on this editorial product the way they work on the free homepage. When we built it, it raised a lot of questions. They didn't know if it was going to work, whether they could put this type of content in that block; they didn't know about journalist shifts, because now we update 24/7: how would they have to organize, would they need three journalists between 8 AM and 2 PM, and many more questions. So we tried to address these questions before going into production. A few weeks before the release of this thing, we had a pretty buggy version, I must say, that we put in the journalists' hands, on which they created this type of homepage, unfinished from a design perspective. But it let them see a lot of things. We discovered other things after the release, but for three weeks, a month, beforehand, they played with it for several hours a day and brought this subscriber homepage to life. And they realized: oh, this doesn't work at all; a photo like this with this type of headline, no way that works. And so we learned about these uses as early as possible.
Second story, about the CMS. As I was telling you, we've now been doing all of this for a month and a half with the CMS we developed.
And actually, the print supplements have been produced with it since June, and the CMS itself has been in production much longer, for ten months. We put into production a minimalist thing that allowed writing without enrichment and that pushed to the old CMS. We somewhat forced users: 'Yes, you're going to use this thing; it's a bit annoying, it doesn't do everything you want and you have to juggle between the two tools, but you like us, we'll bring you chouquettes every week, and you'll see, you'll help us improve the product as we go.' Everything I was telling you earlier. And this thing was a lifesaver, because we had a lot of problems. In fact, for 10 months, we've been solving problems. I'll illustrate this with a few post-mortems. We had crashes, big crashes, we had lost articles, lots and lots of problems, some of which we would otherwise have hit only after the final production release a month ago, if we hadn't done these earlier deployments.
Okay, a few stories. After each of these incidents, we do a post-mortem. Read *Web Operations* or Etsy's blog to structure your post-mortems: how do I gather the timeline of what happened? What were the problems? Not the root cause (there's never a single root cause), but the causes; what problems did I encounter during resolution, and so on. And so we did a post-mortem, for example...
...the time the production database got emptied. A developer launched his automated tests: unit tests that empty the database before loading test data. Except he had left the production database in his environment parameters. So he emptied the production database by launching his automated tests. He got scolded, and very quickly we did the post-mortem. At the time, we had two or three users playing with the system in production; we must have been doing maybe five or ten articles a day, not even that, actually. So it wasn't very serious: we were embarrassed in front of two or three users rather than 400. And I won't say who it was, because if that had happened to me, maybe I wouldn't be here to talk to you.
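One cheap countermeasure you can imagine after a post-mortem like this is to refuse to run destructive test setup against anything that looks like production. A minimal sketch, assuming the connection string lives in a DATABASE_URL environment variable:

```python
import os

# Substrings that identify a real database; adjust to your naming scheme.
PRODUCTION_MARKERS = ("prod",)

def safe_test_database_url() -> str:
    """Return the database URL for tests, refusing anything that looks
    like production. Call this in the test bootstrap, before any
    TRUNCATE or DROP happens."""
    url = os.environ.get("DATABASE_URL", "")
    if not url:
        raise RuntimeError("DATABASE_URL is not set; tests need an explicit test database")
    if any(marker in url.lower() for marker in PRODUCTION_MARKERS):
        raise RuntimeError(f"Refusing to run tests against what looks like production: {url!r}")
    return url

if __name__ == "__main__":
    os.environ["DATABASE_URL"] = "postgres://ci:ci@localhost/test_cms"
    print("tests may use:", safe_test_database_url())
```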
So that's one post-mortem story.
What else did we have? Servers that dropped one after another like flies; I don't remember the details, so I won't tell that one. And the DNS servers: the DNS servers crashed, no more DNS, and the application didn't work at all. So, post-mortem: the ops analyzed the thing. We had redundant DNS servers that were supposed to hand off to each other: one goes down, the other takes over. But this failover mechanism wasn't working.
We weren't alerted, and we discovered the problem very late. It doesn't work, okay, why? I think it took us an hour or two before we got to 'it's the DNS that's not working.' So there you have a pretty obvious component that wasn't supervised and had no alerting. You can say 'we did our job poorly'; I don't know how we got there. Nevertheless, following this post-mortem, we obviously get alerted when DNS problems occur, and the failover mechanism was reviewed and fixed so that it works.

Lots of little stories like that, with crashes that happened to us early; and the earlier we learned these things, the better off we were. On the right, the image is very bad, but it's an example of a monitoring tool, here tracking errors. We put many monitoring tools into production and customized them very early, months and months before the big project release. Typically, in a project's life, monitoring, alerting, and supervision tools come late, very late, or even never. Here, we had real problems, so we really put countermeasures in place for those problems. Now that we've been fully in production for a month, we've added a tiny bit more, but I'd say 90% of the monitoring tools were present before the real 1.0 release. That's a huge learning experience. Incidentally, it also brings ops into the loop much earlier, as I was saying. I'm not going to talk about DevOps, but ops who usually receive projects very late, who are integrated very late, who feel like the fifth wheel: here, they are engaged much earlier, they are confronted with problems much earlier, and they discuss and collaborate with the devs much earlier.
So that's great.
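To give an idea of how little such supervision can cost, here is a minimal sketch of a DNS canary check with alerting, in the spirit of what that post-mortem led to; it assumes the dnspython package, and the server IPs, the canary name, and the alert hook are all illustrative.

```python
"""Minimal DNS canary check: resolve a known name against each of our DNS
servers and alert when one fails. Assumes the dnspython package; the server
IPs, the canary name, and the alert hook are illustrative."""
import dns.resolver  # pip install dnspython

DNS_SERVERS = ["10.0.0.53", "10.0.1.53"]  # primary and fallback (hypothetical)
CANARY_NAME = "cms.example.internal"      # a record that must always resolve

def alert(message: str) -> None:
    # Replace with your email / chat / pager integration.
    print(f"ALERT: {message}")

def check_dns() -> None:
    for server in DNS_SERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            resolver.resolve(CANARY_NAME, "A", lifetime=2.0)
        except Exception as exc:
            # A single dead server means the failover is all that protects us,
            # so we want to know about it immediately.
            alert(f"DNS server {server} failed to resolve {CANARY_NAME}: {exc}")

if __name__ == "__main__":
    check_dns()  # run this from cron or your monitoring scheduler every minute
```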
Okay, last practice.
Last practice. We talked about user tests, then about early deployments. The last one: when faced with a choice, and this happens every day, we face thousands of choices every day, choosing the path that maximizes learning is often a great decision. In our projects, you see smoke and flames on one side, and on the left you hear wolves howling, so we have to choose between these two alternatives. It's never as peaceful as in the photo.
Nevertheless, the typical case is that I have a choice between two features (at least in recent years, that's what it's been): which one do I develop? They cost roughly the same; the developers tell me it's roughly the same complexity.
For users, it's roughly the same return on investment, whether in audience or in usage. It's roughly the same. However, one of them will teach me a lot. One will teach me a lot about usage, because I have a big unknown on that point; I don't know how it works. Or one will teach me a lot about technical complexity, because it will force me to dig into this search-engine module we don't know well, which is a bit obscure. Or one will force me to deploy on a new server technology that I don't master at all. All this to say that you often have an alternative that will teach you a lot, that will teach you enormously. It's often a good choice to take that path, the path that maximizes learning.

So, in the end, what have we learned? It rarely, if ever, works the way we thought it would. Even when we followed all the best practices; even when we did design studios. A design studio is when you put the user in a room and run very, very quick design sprints with them.
User tests, and so on. Even if you do all of that very well, like a model student, it never works the way we think it will. Deployments to production teach us far more than all the other practices; by an order of magnitude, infinitely more. That's why I insisted on the practice of deploying a bit too quickly, a bit poorly, but very, very early in production.
That's really where we learn. And at Le Monde, just as I've shown you things that worked more or less, there are plenty of things that don't work; you must see that too. We're still very bad at user testing: we don't do it for all our projects, for some we still outsource it, and we take a long time to do it. Sometimes we do it well and quickly, but we still have a huge amount to learn in this area.
So, what can I tell you in conclusion?
It's useless (remember, there are three areas, at least as far as I'm concerned) to excel at learning in just one area, to be super strong in one and learn ineffectively in the other two. So I'll draw a small parallel with the theory of constraints: let's say that overall learning will only be as effective as the learning of your weakest link. So if you have cross-functional responsibility,
try to maintain a good level of learning in product, development, and operations, and don't focus on one area at the expense of the others. And that's what we've tried to implement and will continue to implement in the coming months: favoring practices that are good for execution, for getting to point B, good for the project, but also, implicitly, good for learning quickly and well.