Thierry De Pauw
Duration: 46 min
Published: November 23, 2022

Transcript

[00:00:08] Okay, hello. Uh, I'm Thierry and oh darn, he speaks French, the idiot, and he's doing his presentation in English. Sorry.
[00:00:22] Right, there we go. Um, well, I'll start by doing what I do best. It's uh, being vulnerable. All the rest is just being an impostor.
[00:00:37] Um, so I'm bloody shy and introverted. All the good ingredients for speaking, isn't it? Right, yeah. Uh, actually I dislike public speaking. Yeah, why do you do it then? Well, I actually love having spoken and it's uh, it's a fantastic networking tool for introverts. And I need to say this, otherwise I don't feel comfortable. So, this is done, now we can start. Um, so I'm going to share the journey of 15 teams inside a Belgian federal agency that wanted to go from um, bi-annual releases to fortnightly releases in under four months.
[00:01:41] Um, all 15 teams were working on a single monolith. Um, all 15 teams consisted of software engineers, test engineers and analysts. On the side, there were a couple of um, transversal supporting teams involved like um, the architects, the DBAs, the uh, build and release engineers and infrastructure and operations engineers.
[00:02:15] Now, the monolith was written in Java using the EJB technology. Um, it was very old, like 15 years, very big, huge, um, deployed in a single JBoss application server, um, and using this single central big fat Oracle database. Classic, isn't it? Now, that monolith was quite important for the agency because it's the single piece of software that was running the whole business of the agency and also serving the 11 million Belgian citizens. So, quite a big thing.
[00:03:06] So, the agency had three to four planned major releases per year. Well, they used to have that. And those major releases were a very big deal. Whenever one came near, stress was rising. And then there was a code freeze for weeks, during which you had rooms full of users in test sessions, doing regression testing and then acceptance testing, making sure new functionality was working as expected, making sure no new regressions were introduced. And yet, after each and every single major release, a series of urgent hotfixes had to be applied in production. Hmm. Now, not everyone was amused with these major releases in the agency, especially the domain experts, the people sitting in between the business and IT. They wanted to see features released more often because they wanted faster feedback; they wanted to see how these features were used in production and then take new decisions based on that. And so, early 2017, a couple of people came together to discuss the matter: how can we get rid of those major releases? And they quickly realized, well, that wouldn't be too easy and that they needed management support.
[00:04:35] Uh, luckily management was quite favorable to the idea, but they wanted a business case before approving work on this problem. And so,
[00:04:45] people then started to estimate the effort of one major release, and the outcome was quite surprising.
[00:04:53] So every major release had a deployment lead time of 28 days. So that is the time from creating a binary to getting this binary deployed into production. And during those 28 days, 334 person days were consumed. Wow. So, with this the case was made and management approved the project to improve the release process.
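For a rough sense of the annual scale behind that business case, here is a minimal back-of-the-envelope sketch. The per-year totals are an assumption derived from the three to four planned releases mentioned earlier; they were not stated in the talk.

```python
# Back-of-the-envelope cost of the major release process, using the figures
# from the talk: 334 person days consumed per major release, and an assumed
# three to four planned major releases per year.
person_days_per_release = 334

for releases_per_year in (3, 4):
    total = releases_per_year * person_days_per_release
    print(f"{releases_per_year} releases/year -> ~{total} person days/year")
# 3 releases/year -> ~1002 person days/year
# 4 releases/year -> ~1336 person days/year
```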
[00:05:24] So, in September 2018, the agency got in touch with me with the question, yeah, can you help us go from bi-annual releases to fortnightly releases?
[00:05:36] And by the way, we would like that by the end of December.
[00:05:41] Right.
[00:05:43] So, you can imagine my surprise. So, in my humble career, I've never seen an organization um, that big going that quickly towards um, continuous delivery. But the good thing was they actually came to me and now I have a story to share.
[00:06:06] Now, this was going to be my very first experience with 15 teams. Quite, quite impressive. Um, before that I had an experience with seven teams distributed over Belgium and India. I didn't really have a plan to handle that. I wasn't really happy with the outcome, although the organization seemed to be happy because they came back to me for another department. And before that, I've mainly worked with single teams. So, this time I wanted to have, well, something like a plan, or more specifically, I wanted a structure that would help me to tackle this. And I didn't want to use maturity models, as I've often seen used in digital transformations, because they are fundamentally flawed. They assume that learning and improvement is linear, context-free and repeatable amongst teams and organizations. They focus on achieving a static level of technological and organizational changes, a so-called mature state, and then they call it done. And so, inevitably, you end up with the classic digital transformation project, which has a start date and an upfront defined end date, whereas we all know that a digital transformation is never actually finished. It's a thing that is continuously happening, and so we should focus on achieving capabilities and outcomes and adopt a continuous improvement paradigm. And so I wanted to try out the improvement kata. Now, at the same time, my friend Steve Smith, whom you all know by now, was finishing his book, Measuring Continuous Delivery. And interestingly, Steve was also using the improvement kata, but he extended it with theory of constraints applied to the IT delivery process in order to find the bottlenecks and so find the experiments most likely to succeed. Now, usually, when I suggest this way of working to prospects, so using the improvement kata, doing value stream mapping, and then applying theory of constraints, well, most of the time I get silence at the other side of the line and I never hear back from them. And one day I had the reaction: but this is going to impact our whole IT delivery process. And then there was silence at my side, like, what shall I say? Right. But this time I was lucky. My contact at the agency was the internal agile coach, with a strong lean and Kanban background, who quite liked the idea of running experiments and making data-driven decisions. Now, later I learned that IT management was not quite amused with the approach, because they wanted a roadmap, but I can't give a roadmap, because it's not because we've done something in one organization that we can just repeat it in this organization.
[00:09:39] Every organization is unique and has its unique constraints.
[00:09:45] So the first thing we did was set up a core team who would lead the adoption of continuous delivery. And the reason for that was that I wasn't going to arrive there with an army of coaches. I'm not McKinsey or Accenture. It was just me who was going to help them. And so with my limited availability and my limited time, it was impossible that I would be in contact with all 15 teams.
[00:10:17] So, therefore, I was going to be in contact with the core team, and the core team would then be in contact with the 15 teams. Now, creating this core team was fairly simple. The agile coach just sent out an email to the whole IT organization inviting everyone with enough background information, enough interest and enough motivation to participate. And so we ended up with 20 people representing the different roles from the 15 teams. So we had software engineers, we had test engineers, we had analysts, and we also had an architect, a build engineer, a release engineer. Unfortunately, we never saw the DBAs and we never saw the infrastructure and operations engineers, although there were quite some problems in that area too. But well, whatever.
[00:11:13] Now, investing in the practices that make up continuous delivery is very valuable. And this has been confirmed by the academic research done by Dr. Nicole Forsgren and by the book Accelerate, also by Dr. Nicole Forsgren and friends, where she found that, well, by adopting continuous delivery, IT delivery performance will improve, and together with the adoption of lean product management and a generative organizational culture, well, there will be an improvement in organizational performance. Money-wise, we're speaking more turnover, generating more money. But you have to understand that continuous delivery is actually a holistic approach to reaching the right stability and the right speed for your IT services in order to satisfy business demand. And it is not only about speed, as many tend to think. It's first and foremost about quality and stability. We need the quality and the stability in order to improve speed, because it will prevent rework. But we also need the speed in order to improve stability and quality, because the speed will give us fast feedback, which will allow us to improve quality and stability.
[00:12:38] Um, and now I'm a bit lost.
[00:12:45] Now, in order to improve speed and stability, we need to apply a large amount of technological and organizational changes to our organization. And we need to apply this to the unique constraints and the unique circumstances of the organization and this is what makes the adoption of continuous delivery so hard and so difficult. So it starts with a list of technological changes to adopt.
[00:13:18] Um, each of them is very valuable. Each of them is time-consuming and quite difficult to adopt, but each of them is an enabler of continuous delivery. Now, together with these technological changes comes a whole bunch of organizational changes, which are much more time-consuming, much more challenging to adopt, but much more valuable than the technological changes. So let's pick one: Conway's Law alignment. Yesterday there was a talk about Conway's Law.
[00:13:50] So, the DBAs and the build and release engineers were sitting in different departments than the 15 teams, with different line managers. As a result, well, it created the necessary communication overhead. In addition, only the DBAs were allowed to perform database changes. The 15 teams were not allowed to touch the database, and the DBAs were not acting as coaches. So, from time to time, it happened that during a major release, a database change was not applied. Whoops.
[00:14:34] Well, remember the series of urgent hotfixes.
[00:14:38] Um, now, this problem never really got solved during those four months. Um, we had to wait for another crisis somewhere in March 2019 to get to a better solution, not yet how it should be, but already better.
[00:14:57] Um, where do we start? Do we start by improving stability or do we start by improving speed?
[00:15:07] Now, to be fair, this shouldn't be a question, because if you are ever confronted with the question, should we improve speed or stability? Well, you should focus on stability. Speed will follow. Always.
[00:15:21] Now, but should we start by applying technological changes or should we start by applying organizational changes? Which change should we do first? Which one should we do next? Which one will work? Which one will not work in our context? Now, in order to move forward in these uncertain conditions, well, we need something that will help us with that. And this is where the improvement kata comes in. So the improvement kata is a continuous improvement framework for implementing and measuring organizational change. It's a framework for reaching goals in uncertain conditions. It was described by Mike Rother in his book, Toyota Kata, and it consists of four stages that are repeated in a cycle. So we start by defining a direction, a vision, a goal, something that we want to achieve in a far, far future.
[00:16:17] It could be that we never achieve that, but the purpose is to align everyone on the direction so that it is clear where we are heading.
[00:16:28] And we will iterate towards this direction using target conditions, so improvement iterations with a horizon going from one week to three months. So once the direction is set, we start by analyzing the current condition. What does the current process look like? Which data do we have? And then we define our first target condition, so the first improvement that we want to achieve. It has a set date and a measurable target, like an acceptance criterion. Now, during this planning phase, the team does not know how to move forward in the direction of the target condition. It's only during the execution phase that the team starts to run lots and lots of experiments with technological and organizational changes, using the PDCA cycle. So, we plan an experiment and its expected outcome. We execute the experiment and collect the data. We analyze the result and compare it with the expected outcome. If it is successful, we incorporate the change into the baseline. If not, we discard it. And we repeat. Now, for the agency, the kata looked like this. In order to satisfy the business demand of the domain experts, well, the organization understood they had to adopt fortnightly releases. So, the direction was pretty clear. A last major release would happen end of December, and from then on, all releases would happen every fortnight.
[00:18:08] To analyze the current condition, we were going to run a value stream mapping workshop to map the technology value stream. That is all the steps that happen from committing code into version control to getting this code into the hands of the users in production. Now, for the value stream mapping, we were going to use sticky notes, and we were going to do this as a group, with everyone involved in the whole end-to-end IT delivery process present in the room. Now, the benefit of sticky notes is that they are more fluid. It's easier to rearrange them, and so it's faster to iterate over the value stream map than using drawings. And when we do that as a group, well, what happens? Well, it starts messy, and then it becomes messier, and then it gets really, really messy. But as people are iterating and refining the value stream map, they are building on top of each other's knowledge. You have to understand that an organization is a complex adaptive system where everyone only has a limited amount of information and no one has a complete overview of the whole end-to-end IT release process. So in the end, we end up with a value stream map that integrates all the knowledge everyone in the room has about the release process.
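As a rough illustration of how a target condition and its PDCA experiments can be recorded, here is a minimal sketch in Python. The field names and the date are illustrative and not the agency's tooling; the figures echo the first target condition described later in the talk.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Experiment:
    """One PDCA cycle: plan a change and its expected outcome, run it, compare."""
    change: str
    expected_outcome: str
    observed_outcome: str = ""
    kept: bool = False  # True once the change is incorporated into the baseline

@dataclass
class TargetCondition:
    """One improvement iteration with a set date and a measurable target."""
    description: str
    target_date: date
    metric: str
    target_value: float
    experiments: list[Experiment] = field(default_factory=list)

# The agency's first target condition, described later in the talk:
# reduce the pipeline failure rate from 70% to 30% in one month.
first = TargetCondition(
    description="Stabilise the patch release deployment pipeline",
    target_date=date(2018, 11, 30),  # illustrative date, "one month's time"
    metric="deployment pipeline failure rate",
    target_value=0.30,
)
first.experiments.append(Experiment(
    change="Recreate the database before every automated test run",
    expected_outcome="Fewer failures caused by leftover test data",
))
```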
[00:19:36] And so we ended up with this value stream map for the major release process that takes six months. So during five months, there is an accumulation of features.
[00:19:48] Then we have a three-week code freeze. This is when we have rooms full of users in test sessions. This is where the 334 person days start to be consumed. This is also where awareness starts to rise that quality is actually a thing. And then we have a one-week deployment.
[00:20:16] And when that is done, well, we have champagne. Yay.
[00:20:21] Obviously, we've just spent six months to get something into production. So, well, we might as well celebrate this.
[00:20:30] But it also revealed a second value stream, a much faster patch release process designed for production fixes, which was happening every fortnight. And where occasionally, but increasingly, features were also deployed. Look at that. How convenient.
[00:20:54] So when the transaction cost for releasing features is too high, well, often a faster, truncated value stream will emerge for production fixes, resulting in dual value streams. So we have a feature value stream with lead times of months, and we have a fix value stream with lead times of weeks. So although this is an anti-pattern, it was quite key to the success of this journey, because this patch release process revealed the potential the organization had to adopt continuous delivery. So from then on, we only focused on this patch release process and just ditched the major release process. We never looked at it anymore. Now, some people were confused by this decision and tried to get my attention back to the major release process by asking, yeah, but why aren't we trying to reduce the lead time of the major release process? Upon which I answered, well, look, you are already doing this. You are already releasing every fortnight. But you do this under the radar, in a hidden way. Now we are going to make this visible and very transparent. And we will make this patch release process more stable so that it can replace the major release process.
[00:22:18] Now, in order to identify the experiments, and in order to identify the first target condition, so the first improvement that we wanted to reach, we were going to apply theory of constraints to the patch release process to identify the bottleneck. So theory of constraints is a management paradigm introduced by Eli Goldratt in his seminal 1984 book, The Goal. If you haven't read it, I really advise you to read it. If you don't like reading books, there is also a comic version, which is actually very good. Now, the central premise of theory of constraints is that every system has one single governing bottleneck.
[00:23:08] And spending time improving anything other than the bottleneck is just an illusion. So if we have a linear system consisting of steps A, B and C, and B is the bottleneck, losing an hour on B is an hour lost for the whole system. Gaining an hour on A or C is just a mirage, because if we increase throughput on A, it will only result in more inventory in front of B, and if we increase throughput on C, it will only result in work starvation for C. Now, we can apply theory of constraints to our technology value stream, because the technology value stream should be a homogeneous process wherein every step is deterministic, much like it is in manufacturing.
[00:23:55] And now, by applying theory of constraints to our IT delivery process, well, we will be able to identify the bottleneck, and so we will also be able to identify the experiments most likely to succeed. And so in order to identify the bottleneck, I asked everyone in the room: pick a green sticky note and try to estimate the duration of every step, and then pick a red sticky note and try to estimate the failure rate of every step. Now, it doesn't need to be very precise. Most of the time, organizations don't have that information. We just want to have an idea, an order of magnitude.
[00:24:36] Now, interestingly, the bottleneck was not the manual regression testing, nor the manual testing in pre-production, as most people would expect, including myself. The bottleneck was actually the automated tests, the execution of the automated tests. Yeah, but wait. The execution of the automated tests only took one to four hours, whereas the manual testing was taking half a day to a day. How can this be a bottleneck? Well,
[00:25:12] the automated tests had quite a high failure rate, and we will come to that, resulting in lots of rework and re-execution of the automated tests, which adds up to the total lead time. And in addition, each of the lanes that you see there represents a version control branch. So on each merge of a branch, well, the automated tests had to be re-executed again, adding up to the total lead time.
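As a rough illustration of that bottleneck arithmetic, here is a minimal sketch in Python. The step names and most numbers are illustrative rather than the agency's actual value stream map; only the automated-test figures echo the ones mentioned in the talk. The key point is that rework inflates a step's effective duration, roughly duration / (1 - failure rate).

```python
# Rough bottleneck identification for a value stream: for each step we have
# an estimated duration and failure rate (order of magnitude only). A failed
# step is reworked and re-executed, so the expected time per step is
# duration / (1 - failure_rate) -- a simple geometric-retry model.
steps = {
    # step name:             (hours per run, failure rate)
    "build":                 (0.5, 0.05),
    "automated tests":       (4.0, 0.70),  # 1-4 h per run, ~70% failure rate
    "manual regression":     (8.0, 0.10),  # half a day to a day
    "deploy to production":  (2.0, 0.05),
}

def expected_hours(duration, failure_rate):
    return duration / (1.0 - failure_rate)

for name, (duration, failure_rate) in steps.items():
    print(f"{name:22s} {expected_hours(duration, failure_rate):5.1f} h expected")

bottleneck = max(steps, key=lambda s: expected_hours(*steps[s]))
print("bottleneck:", bottleneck)  # automated tests, despite the shorter single run
```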
[00:25:40] Now, up until then,
[00:25:44] each of these 15 teams was managing the automated tests on their own. So they each had their own sets of automated tests. And they decided on an ad hoc basis whether to execute the tests or not, and which tests to execute or not. Well, mainly based on, yeah, which changes have been applied to the system: well, I guess we can execute these and these and these tests. Um, yeah.
[00:26:15] So as a result, no one really had an overview of how many tests existed and how stable they were. So in order to increase visibility and transparency about those tests, I suggested as a first experiment, well, let's implement the deployment pipeline.
[00:26:34] Now, it wasn't really a deployment pipeline. Well, it didn't deploy into the different environments. It only executed automated tests. Because well, the deployment happened with another tool. Whatever. Um, now, the deployment pipeline is a key design pattern from continuous delivery. So it's the automated manifestation of getting code out of version control
[00:27:01] into the hands of the users in production. And the purpose is to increase visibility and transparency about the whole end-to-end release process, which will increase feedback and which will also increase the empowerment of teams. So from now on, on every change, all automated tests would be executed. Now, I also suggested that they should collect some metrics about the deployment pipeline: the lead time and the failure rate. So the lead time was eight hours,
[00:27:33] and the failure rate was 70%. Whoops.
[00:27:37] Yeah. Quite high. Now, the reasons for the very high failure rate were quite diverse. First, automated tests were sitting in a different version control repository than the production code, resulting in automated tests getting out of sync with production code. The idea, or well, the concept, that one failing test fails a release candidate was totally unknown to the organization. Up until then, they applied something called test failure analysis. So whenever a test failed, they analyzed the cause. If the test failed because of a recent change to functionality that was covered by the test, then, well, the test failure was accepted and the release candidate was discarded. But if no change had happened to the functionality covered by the test and the test was still failing, they were like, well, it'll be okay, I guess.
[00:28:40] And so they accepted the release candidate. Yeah, what can possibly go wrong, right?
[00:28:49] Um, other reasons were: test data was not cleaned up before running the automated tests, and lots of tests were depending on third-party services, which were sometimes up, sometimes down, resulting in failing tests.
[00:29:08] So this led us to this current condition. We have two value streams: a feature value stream, the major release process, and a fix value stream, the patch release process. The lead time of the patch release process is eight hours, and the failure rate is 70%.
[00:29:33] For the first target condition, so the first improvement, the team decided it would be one of stability, which is a fair choice given that if you improve stability, speed will follow.
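One common remedy for that last cause is to stub out the third-party dependency in the tests. The agency's code base was Java, but here is a minimal sketch of the idea in Python, with hypothetical names only.

```python
import unittest

# Hypothetical production code that calls a third-party registry service
# which is sometimes up, sometimes down.
def citizen_status(national_number, registry_client):
    return registry_client.lookup(national_number)["status"]

class CitizenStatusTest(unittest.TestCase):
    def test_returns_status_from_registry(self):
        # Replace the real registry client with a stub, so the test no longer
        # depends on the availability of the external service.
        class StubRegistry:
            def lookup(self, national_number):
                return {"status": "active"}

        self.assertEqual(citizen_status("85.01.01-123.45", StubRegistry()), "active")

if __name__ == "__main__":
    unittest.main()
```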
[00:29:48] And they wanted to reduce the failure rate from 70% to 30% in one month's time.
[00:29:57] And then they started identifying experiments to execute. So first was implementing a deployment pipeline; that was done, check. They wanted to start evaluating, or analyzing, the failing tests on a daily basis and actually doing something about them. They wanted to recreate the database before running the automated tests so that test data was cleaned up before running the automated tests. They wanted to have a separate environment for their automated tests from the manual testing, so that there was less clash between manual and automated tests. They wanted to stub out third-party services. And then I suggested to them, well, it would also be interesting if we could automatically collect metrics on lead time and failure rate so that we can start creating dashboards and see the trends. Are we improving or are we regressing?
[00:31:05] Now, that was the plan. The execution went slightly differently, um, and it took me another six months after they reached continuous delivery,
[00:31:19] to realize it was actually an example of, um, fear conversations. Well, and actually it was Jeffrey Fredrick and Douglas Squirrel who brought that to my attention.
[00:31:37] Uh, so the fear conversations helped us to mitigate fear and have the difficult conversations we had to have in order to move forward. So, the fear conversation is one of five types of conversations an organization should have in order to become a high-performing organization. It is described by Jeffrey Fredrick and Douglas Squirrel in their book
[00:32:00] Agile Conversations. It's, um, well, I can recommend the book. It's really good. Now, in what follows, I will share three fears, but there were more fears. And if you want to have the whole list of fears, well, it's in the book, Agile Conversations.
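For that last suggestion, here is a minimal sketch of how lead time and failure rate could be derived from pipeline run records. The data shape and the numbers are hypothetical; in practice the records would come from the CI server.

```python
from datetime import datetime, timedelta

# Each pipeline run: when it started, how long it took, and whether it passed.
# (Hypothetical records; real data would come from the CI server.)
runs = [
    {"started": datetime(2018, 10, 1, 8), "duration": timedelta(hours=8), "passed": False},
    {"started": datetime(2018, 10, 2, 8), "duration": timedelta(hours=8), "passed": False},
    {"started": datetime(2018, 10, 3, 8), "duration": timedelta(hours=7), "passed": True},
]

lead_time = sum((r["duration"] for r in runs), timedelta()) / len(runs)
failure_rate = sum(1 for r in runs if not r["passed"]) / len(runs)

print(f"average pipeline lead time: {lead_time}")
print(f"pipeline failure rate:      {failure_rate:.0%}")
```

Plotting these two numbers per week would give the kind of trend dashboard mentioned above.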
[00:32:19] So the first fear was one of complexity. So when I arrived in September, the next major release cycle had started somewhere in June.
[00:32:31] So giving the organization two months' time to accumulate features. Now, I wasn't really comfortable with the idea of having a big bang switch at the end of December between the last major release and then going to fortnightly releases. I thought that would be too risky.
[00:32:51] And so I was hoping there was a way we could gradually move features planned for the major release towards the patch release process, do that in an incremental way, and eventually get rid of this last major release so that it just didn't happen anymore. Now, it took me quite some time to understand, and even more to accept, that this would not be possible. And I remember we have been drawing and drawing on the whiteboard that day, them trying to explain to me why it could not work.
[00:33:29] And this is what we ended up with on the board. So the reason why it didn't work was the complex branching strategy that was in place. They used something that resembled Git Flow. So they had a long-running develop branch and a semi-long-running master, or well, main branch. Yeah, now it gets interesting, doesn't it? Um,
[00:33:53] some teams were committing directly into develop. Some teams were committing into feature branches. Other teams were using team branches, and yet other teams were using a combination of team and feature branches.
[00:34:06] Self-organizing. Autonomy, something like that. Um. Now, most branches were living for weeks or months.
[00:34:20] Now, during every fortnightly release, well, during every patch release, what happened was that fixes and features planned for the patch release were cherry-picked from the develop branch into the main branch, resulting in develop deviating more and more from the main branch and making it almost impossible to merge develop back into the main branch. So what happened during a major release? Well, deployment happened from develop, which was possible because develop included all the fixes and features already deployed with the patch release process, plus the features planned for the major release. And they just deleted the main branch, recreated a new one, and started over again. So with a big sigh, especially from me, we accepted that the way to mitigate this complexity fear was to just accept that a last major release would happen end of December. In hindsight, I have to admit, it was less risky than I thought. Because, well, in the run-up to switching from the major to the patch release process, we incrementally improved the patch release process and made it more and more stable. And each time we added new improvements, well, they were implemented immediately and executed every fortnight. So they were tested enough by the time we did the switch. Now, the team also wanted to implement proper Git Flow in January, after the last major release.
[00:36:07] Now, people who know me know that I'm not a big fan of branching. But given the context, it was fair to go for this implementation. Now, they also promised me they would work on reducing the lifetime of their branches. That worked a bit less well.
[00:36:30] So six months later, they were still struggling with having a stable deployment pipeline. Now, in the meantime, the lead time evolved from eight hours to four hours, which is quite an improvement in feedback cycle, because all of a sudden they could execute their deployment pipeline twice a day instead of once a day. The number of automated tests increased, not because teams were frantically writing new tests. No, just because teams were discovering: oh, we have a bunch of tests here we forgot about. Shall we add them to the deployment pipeline? Yes, do so, please.
[00:37:07] Um, but stability didn't improve at all. We had to wait till late November before having the first green deployment pipeline where all tests were green, and then it was red again for another two weeks. Yeah, at that time, I was not really reassured that this would work.
[00:37:29] The second fear was one of deadlines. Being part of government, the agency had legally mandated deadlines, and, well, not hitting those deadlines was just not an option. But the question arose: how will we hit those hard targets, knowing that a single failing deployment pipeline can block a release? And given the stability of the deployment pipeline, it was a fair fear.
[00:38:00] Now, to mitigate this, the core team came up with a manual overrule of the deployment pipeline. So, despite a failing pipeline, near a hard target the team could decide to overrule the deployment pipeline and still accept the release candidate, and mitigate the risk by introducing sufficient manual testing. Now, I wasn't too amused with that choice, because I feared it would remove the pressure to actually do something about improving the automated tests. And my fear was confirmed: as I already said, six months later they were still struggling with having a stable deployment pipeline.
[00:38:45] And the last fear was one of bugs, and this was the biggest concern of IT management. A single severe issue would land the agency on the front pages of all Belgian newspapers and damage their reputation. And again, given the stability of the automated tests, it was a well-founded fear. Many of the tests were flaky, non-deterministic: re-executing a failing test in isolation often turned it green. Now, to mitigate this, the QA expert suggested,
[00:39:23] well, couldn't we split the automated tests into two sets of tests? We have a set of stable tests, which most of the time are green, and they are executed by the deployment pipeline all the time. And we have a set of quarantined, unstable tests, and they are only executed at night. And whenever a quarantined test fails, it will never fail a release candidate, whereas if a stable test fails, it fails the release candidate and the release candidate is discarded. Now, interestingly, the lead time reduced from four hours to two hours. So all of a sudden, they could run the deployment pipeline four times a day. Imagine the acceleration in feedback.
[00:40:18] Now, the last major release happened as planned, end of December, without major issues, which reassured IT management to move forward with the plan to adopt fortnightly releases. The first fortnightly release was planned in the third week of January. Everyone was holding their breath, expecting a storm of problems. And actually it went very smoothly, without much problem. It felt like a normal patch release, although the underlying process had changed dramatically. So by making the fears discussable and mitigating these fears, well, it introduced enough protection for the agency to move forward with fortnightly releases. And from then on, every release came out every fortnight, like clockwork, and it never stopped. And I remember when I came back after the first fortnightly release, well, I saw smiles all over the place. Everyone was super excited. They couldn't imagine such a change was possible inside such an organization, and yet they did it.
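Here is a minimal sketch of how such a split could be driven by historical pass rates. The talk does not describe how the QA expert actually made the split, so the threshold, test names and data below are illustrative only.

```python
# Partition tests into a stable set (run in the deployment pipeline on every
# change, allowed to fail a release candidate) and a quarantined set (run
# only at night, never failing a release candidate), based on recent history.
history = {
    # test name:                recent results (True = passed)
    "test_submit_declaration":  [True, True, True, True, True],
    "test_calculate_benefits":  [True, False, True, False, True],  # flaky
    "test_update_citizen_data": [True, True, True, True, False],
}

STABILITY_THRESHOLD = 0.9  # illustrative cut-off

stable, quarantined = [], []
for name, results in history.items():
    pass_rate = sum(results) / len(results)
    (stable if pass_rate >= STABILITY_THRESHOLD else quarantined).append(name)

print("stable (fail the release candidate):", stable)
print("quarantined (nightly only):         ", quarantined)
```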
[00:41:37] Are fortnightly releases considered continuous delivery?
[00:41:43] Well, continuous delivery has a dynamic success threshold. We say an organization is in a state of continuous delivery when it achieved the right stability and the right speed for its IT services to satisfy business demand. And so with the fortnightly releases, well, the demand of the domain experts was satisfied. So yes, they reached continuous delivery.
[00:42:13] However, I've also realized, and this was quite a surprise, that it's possible to reach continuous delivery without ever reaching continuous integration. Wow. Although it is said in the community that continuous integration is a prerequisite for continuous delivery. So continuous integration has a static success threshold. We say an organization is in a state of continuous integration when everyone in the team or in the organization commits at least once a day into mainline, every commit triggers an automated build and execution of automated tests, and whenever the build fails, it's fixed within 10 minutes. Now, with the data we had about the agency, well, it was clear they never reached continuous integration.
[00:43:06] Pff. Now, is this a good thing? Well, no. This is not a good thing. We will not be able to sustain continuous delivery in the long run. We will be running a higher level of risk of delayed deployments and production failures than people can stand. And this will result in a lot of stress, a lot of fatigue, and in burnout. And this will impact your organizational performance.
[00:43:43] Now, many times when I was there, I was wondering: what am I actually doing here? This is not moving forward. This is going nowhere. Change was very slow. Experiments were not implemented. Stability didn't improve. I felt very little, very insignificant, and at times really, really exhausted. And yet, it happened.
[00:44:12] In the most unexpected way. And it happened, well, because this core team was a very jelled team, very motivated. They absolutely wanted to achieve this. The internal agile coach spent quite a lot of time creating a safe environment. The core team communicated a lot inside the organization about what was going to happen, what would change, what the consequences would be. And also the very short time frame, less than four months, created a sense of urgency. And then, most importantly, the core team just used the inertia of the organization to their advantage. They just decided and acted faster than the opposition could react.
[00:45:08] Now, the main, well, the biggest benefit of this whole change is that it created transparency and visibility. It uncovered all the problems everyone was already aware of, but now they were visible. Um,
[00:45:27] and once dashboards were in place that showed the evolution of lead time and stability, IT management got more and more interested in this, and it unlocked more budget for tech coaching. So from my side, I've only spent 12 days of my time on this journey.
[00:45:51] So, two days of kickoff where we defined the improvement kata, did the value stream mapping workshop, applied theory of constraints, then defined the first target improvement and then defined all the experiments. And this was then followed by a one-day-per-week follow-up by myself. And so my role was more one of guidance and explaining principles and practices. All the hard work, all the improvement work, was actually implemented by the agency, not by me. So thank you. This was it.
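The first of those three conditions is straightforward to check against version control data. Here is a minimal sketch with a hypothetical mainline commit log; these are not the agency's actual measurements.

```python
from datetime import date

# Continuous integration as a static success threshold (first condition only):
# everyone commits to mainline at least once a day.
# Hypothetical mainline commit log: (author, commit date) pairs.
mainline_commits = [
    ("alice", date(2018, 11, 5)),
    ("bob",   date(2018, 11, 5)),
    ("alice", date(2018, 11, 6)),
]
team = {"alice", "bob", "carol"}
workdays = [date(2018, 11, 5), date(2018, 11, 6)]

daily_committers_ok = all(
    team <= {author for author, day in mainline_commits if day == d}
    for d in workdays
)
print("everyone commits to mainline daily:", daily_committers_ok)
# False here: not everyone commits to mainline every workday.
```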