Product Management for Continuous Delivery
Duration: 40 min
Published: January 6, 2020

Transcript

[00:00:00] And a few bright colors to get you energized this morning. Thank you so much for fighting your way into FlowCon this morning. I want to talk about product management for continuous delivery, but before I do that, I absolutely have to tell you about my favorite museum. We are in Paris, but it is not the Louvre. It is also not the Musée des Arts et Métiers, which I visited yesterday, and it was incredible. If you haven't been, you really must go and sample the incredible history of technology that is present there. But the museum that I love the most I first found out about on Twitter, when I saw a picture of this and somebody said, does anybody know what this building in London is? And I thought, ooh, "Facts not Opinions", hey. I can get behind that. But moreover, I thought there was something very funny in this, because whoever wrote this clearly has opinions. For all he's saying facts not opinions, he's got views. So I went and discovered a bit more about this museum and found that it was the atelier of a man called David Kirkaldy, who is a Victorian superhero. An unbelievable man that you've never heard of before. And I will defend the position that he is a superhero, thus. Kirkaldy stopped bridges falling down. That's superhero behavior, yes. In the era that Kirkaldy was working, bridge disasters were incredibly common. Here is one where the Great Yarmouth suspension bridge fell down; 79 people died, most of them children. This kind of thing happened year after year; there were failures of technology like this. Kirkaldy stopped trains derailing. The Versailles rail disaster of 1842, for example, was one of six catastrophic rail disasters, and this one was caused by a broken axle. And Kirkaldy did this by testing materials and components. Testing materials and components. In this era, there were lots of new materials being developed: new types of iron, new types of steel.
And people were throwing them straight into industrial use, pretty much. I mean, they tried to test, but weren't very systematic about it. And so he brought mature, systematic testing to these new materials and components. And this caused a mind shift in the way people approached the technology of the era. It used to be that when something went wrong, you would get some experts who would testify, well, I think it was because of these things. And some other expert would say, well, I think it was because of this. And eventually they would blame a drunken foreman or something like that, somebody low-status, and all go off and continue to do the same thing again, causing terrible safety and really poor infrastructure in that era. So when I finally made it into this studio, this facts-not-opinions place, what I found in there was this: a machine called a universal testing machine. It's a 15-metre-long machine, beautifully precision-engineered for its day, so that you could do this kind of testing. Amazing engineering. It was powered by London's underground hydraulic network, super cool. And this machine is still in operation today in this museum, along with a bunch of other testing machines. It's open one day a month, so if you ever want to see this for yourself you have to plan ahead, but it is absolutely worthwhile, and it's an incredible testament to the work of somebody who brought maturity to a discipline that was not yet mature. And his work is still visible. He tested Blackfriars Bridge; he tested materials for a bunch of other things that are still in use today. And he could tell this way whether something had flaws within weeks, which is how long this kind of testing took, rather than years. He could take a steel girder that was going into a bridge and test it.
He could see what the pattern looked like when you bent something under incredible forces, so you could know why something was failing later on; you would know what that fracture pattern looked like. And this led to faster delivery, because you didn't put bad components into your engineering. It led to lower cost of maintenance, and it led to much better safety. Does that sound familiar to anybody? Maybe, possibly. So, about continuous delivery: it delivers a lot of the same benefits, but this was a very early precursor. Before the lean movement, before Agile, before any of that, 100 years before that, you had somebody applying these principles and getting some of these early benefits. And continuous delivery is now delivering faster delivery, lower cost of maintenance, safety, reliability, all of these benefits from this technology. Just some level setting before we get into the product things, to make sure we're all talking about the same thing with continuous delivery. It was codified initially by Jez Humble and Dave Farley in the book Continuous Delivery. If you haven't read it, it is one of the true classics. And it's about getting useful software to users quickly, safely, and sustainably, in Jez's own words. The central concept here is the deployment pipeline. You have a cross-functional team working together now; we've shifted as many skills as possible into that initial development team. And there are some principles embedded in here, like everything must go into version control. Everything then flows through integration, preferably trunk-based development. Everything goes through integration and builds artifacts, which then get automatically propagated to downstream environments, giving us a reliability, because of that infrastructure, that we didn't have in the bad old days.
And then the ability to see very quickly if something wasn't going to actually work when it eventually went into production. So it has all these feedback loops built in. It's got that initial one with a developer on their machine. It's got a slightly larger one with continuous integration and some more testing at that level. And then obviously in those downstream environments, we have more feedback coming in for those specific purposes. Now, it is not the same thing as continuous deployment. Just to clarify, this does not mean that all of the changes are flowing straight out to production with no human intervention. It means that everything is deployable, so it's possible to deploy to production, if that's what the business deems suitable. So that is continuous delivery.
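The pipeline shape described above can be sketched in a few lines. This is a toy model, not from the talk; the stage names and checks are hypothetical. It shows the core idea: an artifact is built once and promoted through downstream environments only while every automated check passes, which gives fast feedback on exactly where a change fails.

```python
# Toy model of a deployment pipeline (hypothetical stage names and checks).
# A change enters from version control; the artifact is promoted through
# downstream stages only while each stage's automated checks pass.

STAGES = ["commit", "acceptance", "staging"]

def run_pipeline(artifact, checks):
    """Promote artifact through STAGES; stop at the first failing check."""
    passed = []
    for stage in STAGES:
        if not checks[stage](artifact):
            return passed, stage   # fast feedback: exactly where it failed
        passed.append(stage)
    return passed, None            # all stages green: deployable

checks = {
    "commit":     lambda a: a["compiles"],
    "acceptance": lambda a: a["tests_pass"],
    "staging":    lambda a: a["smoke_ok"],
}

# A change with a failing acceptance test never reaches staging:
passed, failed_at = run_pipeline(
    {"compiles": True, "tests_pass": False, "smoke_ok": True}, checks)
# passed == ["commit"], failed_at == "acceptance"
```

Note that "deployable" here is the pipeline's verdict, not a deployment: whether to push to production stays a business decision, which is the continuous delivery versus continuous deployment distinction above.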
[00:06:57] And continuous delivery presents these incredible opportunities to product managers. This was something that I, as a product manager, didn't initially see. I fought against this quite hard. It looked like more developer navel-gazing, trying to do technology without keeping an eye on the cost and the benefit we were eventually going to get. This was in the day when people switched source control really frequently, and, you know, obviously these developers just want to. I was completely wrong, completely wrong on that. And I look back and, okay, hold my hands up. It really did deliver on the speed of delivery and the quality improvements, way beyond what the developers ever promised. And that meant we had all kinds of positive knock-on effects: we didn't have as much rework, we didn't have customers calling us up furious anymore. We had much better trust within the organization because the development just went better. And again, product managers may fight against this. But now that we've adopted it, now that it's bedded into the technical community, it has got very solid technical practices, and product managers generally aren't rejecting it anymore, like I was. But we're still not embracing the benefits it really gives us. To look at why, I'd like to go back into the history a little, into why this is falling hard on the product mindset. And I want to do that with the help of my friend Stephen Hawking, who wrote a sequel to A Brief History of Time that nobody knows about: A Brief History of Products: From the Big Bang to Black Holes.
[00:08:46] And how many folks here remember what it was like to release yearly? Yeah, quite a few. How many folks here are still releasing yearly? Oh, only one. So back in the old days, for those of you who remember it, we've got to tell these young people what it was like. We had these big bang releases once a year or so, and because we only had one shot at this, we stuffed everything we could into the release. We jammed all the features, all the fixes in, and this led to really terrible quality, which meant that customers didn't want to deploy the release we just released. They would want to test it for months, or wait for the dot release or whatever, leading to all of this low-value work that didn't have to happen. So that's what it was like back in the old days. And we described what this value would be like with specifications that looked something like this. I don't know if you can actually read it up there, but a specification like this tells you exactly where to put the buttons, but in words. And specs were really weaponized in those days. You would have somebody with a business case saying, well, I'm going to make 100 million dollars if you invest in mine. Well, I'm going to make 200 million dollars if you invest in mine. And it was all completely made up. I mean, we tried to get professional about requirements engineering, but honestly, when I look back at those bad old days, it was the underpinning for delivering very, very low customer value. But then we got Agile. Hooray! We had a development team delivering software to the product owner, who was delivering backlogs back to the team. And the cycles shrunk, it got a bit better. But, well, from the big bang to the black hole. The black hole part is this: we got more agile working methods within the team, but what happened when something went out to real users? That was pretty much it.
It went out the door, and then we said bye-bye and never thought about it again. And this is what I would call black hole Agile. Unfortunately, a lot of us still experience it today. If you want to know if this is you, the signs of black hole Agile are things like: done means coded, or tested, or released, but it doesn't mean validated. In this kind of world, the release is the celebration point. Woo, throwing it out into the black hole! But user success isn't the thing that's celebrated. You notice in black hole Agile that no feature ever fails. Never. Because you would never know it. It could go out there and never deliver on the business promise, but who would ever know, because we don't have those validation mechanisms in place. And planning often starts with feature ideas. You say, okay, what's the list of things that everyone's got in their heads that we really think we should do? Does anybody recognize this as being how they work? Yeah. Unfortunately, it is still extremely common for us all to work this way. And I think we can trace this behavior back to the days when we had those release specifications and that attitude of throwing things out to the users and forgetting about them. We are still doing that with a lot of Agile development today. So, what lies beyond the black hole? Closing the loop. In that diagram for continuous delivery, we had things go out to the users, and this arrow just wasn't there. So what are we going to do? We're going to try to bring that information back from the users. We're going to add that business loop into our continuous delivery, so that we're not just doing technical development loops, but we're also doing production loops that are testing a business hypothesis in the same way that development is testing a technical hypothesis. And of course, it's easy.
You describe the outcome you want. You ship some software and you measure it, right? Lol.
[00:13:13] Literally anyone who has tried this will know that it is not easy at all. It is really, really hard. In fact, Nicole Forsgren tweeted just recently (she's the woman who wrote Accelerate with Jez Humble, a wonderful book, you should also read that), and this was along the lines of that meme: quote-tweet something that everyone in your industry knows but nobody's talking about. And she was right in there with: measuring things is really important, but really hard, and none of us does it.
[00:13:46] Black hole Agile. That's what we see. That's what we're still living. And so you see a proliferation of objective-setting frameworks trying to help close the loop on Agile, trying to specify what those outcomes are that you're looking for and bake them back in, making sure that you're checking on them. Most people will talk about Objectives and Key Results. There are so many different flavors of this now that it almost isn't one thing; it's a whole family of ways of expressing intent and then checking on that intent. There are a bunch of others. I actually quite like what I'm seeing, though I haven't tried it, in The 4 Disciplines of Execution. I like what I'm seeing in the Performance Measurement Process, which is something that Stacey Barr, a performance measurement expert whose work is wonderful, has done. The North Star Framework just came out of Amplitude, a company that produces software for tracking these outcomes. There are lots and lots of these frameworks coming around trying to help us solve these problems. Just to give you an example, in case you've never actually worked with an OKR before: like I say, there are lots of different flavors, but here is one. And this is one where I thought I would do my conference organizers a favor. I did a little bit of competitive research and found out from a very large product conference what they are using. I thought, a product conference, they must actually do some measurement, right? Turns out they did. And they had broken down attendee satisfaction into some very simple categories: was registration smooth? How was speaker quality? And how good was the food? To be fair, I think that does really sum it up. And so they had goals from year to year. One year, they had a very bad experience with people in long lines out the door, some of them missing the keynote.
And so they had an objective in place to keep attendee satisfaction high, particularly around registration. And the way they were checking themselves on that was: could we get 90% of registrants into the hall before the first session? They generally got an attendance rate of 97%, and they thought 90% would be just about attainable, to get that many people into the session. What this does is express clearly what the intent is, and express exactly how they're going to judge themselves on it. And one of the wonderful things about this is that it doesn't say how to do it. There are lots of ways they could choose to approach this. They could add volunteers at registration to speed it up. They could move the keynote start time. They have lots and lots of options, because they didn't say exactly what they were going to do. They were all aligned around an objective which, even in the moment, they could work towards: get those attendees in before the keynote starts. So that's an example of OKRs and how they're used, of trying to align around intent. Which brings us to the first point of product managing for continuous delivery: align at the level of intent. Not at the feature level, but at the level of what you're trying to achieve. Now, to be fair, good product managers have been doing this the whole time. They're trying to make sure that whole teams are aligned on what they're trying to achieve, and to get good input from those teams into what they're trying to do. But that isn't all. It's not just aligning at the level of intent; it's also checking and changing course. If you're in that moment and you discover you're on a course that's not the best way to go towards that intent, you need the freedom to change it. You need to be doing that constant checking and changing course.
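Checking a key result like the conference's is mechanical once you record the data. Everything in this sketch is hypothetical (the timestamps, the 09:00 keynote time), but it shows how a key result turns intent into a yes/no check without prescribing how to achieve it:

```python
# Hypothetical check of the key result: did at least 90% of registrants
# get into the hall before the first session? All data here is invented.

KEYNOTE_START = 9 * 60   # 09:00, in minutes since midnight (assumed time)

def key_result(entry_times, target=0.90):
    """entry_times: minute of day each registrant entered the hall."""
    on_time = sum(1 for t in entry_times if t <= KEYNOTE_START)
    rate = on_time / len(entry_times)
    return rate, rate >= target

# Nine of ten registrants made it in before 09:00, one straggled:
entries = [500, 505, 510, 515, 520, 525, 530, 535, 540, 575]
rate, met = key_result(entries)
# rate == 0.9, met is True
```

Whether the team hits the number by adding registration volunteers or by moving the keynote is left open; the check only encodes the intent.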
[00:17:31] This is a question I now get as I coach product management in certain large organizations: well, it's probably okay if we don't measure, right? And the answer is that it is okay until it isn't. So right now, I'm working for the United States government, and they have some software which is really bad. Like really, really, really, really bad. I'm very much of the mind that if you want to evolve a system forwards, you start from where you are, and you strangle it, or you slice it; you do some things from where you are, you evolve it forwards. But there is nothing to save from many of these systems. It's a total write-off. The business logic is flawed, the user experience is almost non-existent, and the sea of workarounds around the software is agonizing for anyone who cares about the experience of users. And so you say, maybe we'll try to put a new authentication system in. We're going to try to use something which is more secure and uses modern workflows, modern development patterns. And there is no doubt in our minds that this is going to deliver on the intent of better security and better user experience. None whatsoever. We would be absolutely astonished if it didn't. Starting from where we are, with software that bad, honestly, you can be intuitive about what you're doing. You can pick the low-hanging fruit. You can do the obvious things. And so measurement in those early stages doesn't actually deliver the value, and people start to think, well, this is pointless. What am I measuring this for? We knew that. The problem is that at some point that will change, especially with continuous delivery teams. There they are, nice and iterative, delivering value really quickly. This will change.
They will get through all of the things which are obvious and intuitive, and at that point, things stop working. They stop obviously delivering on the value, and they don't know what to do. The team doesn't have a mechanism for working in that environment, and what worked before doesn't work anymore. So, it's probably okay if we don't measure? No. No, it's not okay, because at some point, with a continuous delivery team, we have to have that mechanism. Just to give some evidence that this always happens: there was a paper Microsoft wrote, where they decided to test whether their features were delivering the value they were looking for. They did a very broad test across capabilities being delivered through their applications and found that only 30% actually delivered on the business value they were expecting. Only 30%. 60% either had no change or had a detrimental effect on the thing they were monitoring. So this was after some years of doing the obvious things but not having a culture of measurement, and suddenly they find themselves in a position where their best guesses are not good. 30% is not good.
And it turns out this is a really common statistic. If you look at a bunch of other companies, that 20 to 30% figure is what they all cite for when you first start measuring. After you've done the really obvious stuff for a while, you start measuring and you go: oh, only 20 to 30% of our capabilities are delivering on the business value we thought they were going to. And at that point, these companies start to improve. They start to track their rate of success with these features, and they can start to understand which things move the needle and which things don't. It's that learning which is absolutely essential to successful product work beyond that stage. It should be intuitive to us, though, that value can go down as well as up when a feature is added. When we step back to think about it, that's kind of obvious really, because otherwise we would never end up with interfaces like this. We couldn't ever get there. Somebody thought each one of these capabilities, each one of these dialogs, was valuable, but at some point it became a big mess people couldn't work with. And we may have seen that in some enterprise software that we all use today. We know that a feature doesn't necessarily add value. And this is one of my beefs with the word feature: it's so unmitigatedly positive. But it's not necessarily a positive thing. If only 30% of these things are delivering on the business intent, that's not a feature, that's simply some software you shipped out the door.
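A team starting to track that rate of success only needs to record, per shipped capability, what happened to the metric it targeted. Here is a minimal tally; the deltas are invented numbers, chosen to echo the 30% figure rather than taken from any real dataset:

```python
# Sketch of tracking feature success, with invented metric deltas.
# Positive = moved the target metric up, zero = no change, negative = hurt it.

def tally(deltas):
    """Summarize per-feature metric deltas into a success-rate report."""
    improved  = sum(1 for d in deltas if d > 0)
    flat      = sum(1 for d in deltas if d == 0)
    regressed = sum(1 for d in deltas if d < 0)
    return {"improved": improved, "flat": flat, "regressed": regressed,
            "success_rate": improved / len(deltas)}

shipped = [0.04, 0.0, -0.02, 0.0, 0.01, -0.05, 0.0, 0.0, 0.0, 0.03]
stats = tally(shipped)
# 3 of 10 shipped capabilities improved their metric: success_rate == 0.3
```

The point of keeping even a crude ledger like this is the learning loop the talk describes: once you can see the rate, you can start asking why some things moved the needle and others didn't.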
[00:22:20] We do find businesses, probably your own, measuring plenty of things, so it's not for want of actually measuring. I think this situation is what I'm seeing a lot of. This quote actually came from somebody within Microsoft well after that report was published: we have a lot of metrics, but it isn't changing anything. Meaning that the measurement is happening, but it's not joined up to drive decisions and behavior. And the difference between companies which simply measure and companies which really make use of the measurement is there in the culture. It's not in the tools they're using, it's not in the framework; it's in the culture of measurement in those companies. Someone like Amazon has a deeply embedded culture of measurement and cascading objectives and all of that, and it's everywhere in their behavior around product. So what do I mean by culture? I'm not talking about the opera and the ballet. I'm talking about culture as a pattern of assumptions that has worked well enough to be considered valid. So, things that we did where there wasn't necessarily a basis for them, but they were fine. They worked. And so we continue using them. This man, Edgar Schein, wrote some really great books on organizational culture, and with this definition, about those shared basic assumptions about what works well enough, we can consider measurement. This whole it's-probably-okay-if-we-don't-measure thing is a cultural question. This is something which is bedded not just into organizational culture, but into the occupational culture of product management. And for me, I think this traces all the way back to those days of specifications, when we just threw things over the wall and all of those product managers had successful careers. I had a successful career doing that at Siemens. Did I make good software? No, but I still got raises and promotions and things like that.
And so, in the culture sense, this worked well enough to be considered valid. So it became bedded into product management culture. And if you look at the people who are doing the hiring, the people who are doing the training, they may talk the talk on measurement, but I think we still find that deep in our culture, it's not there. We are not walking the walk as well as talking the talk. So product wins when we extend that DevOps culture, just like we're extending those feedback loops. We need to add that validation culture in there, that measurement culture, to bed it into our behaviors and not just our intentions. I'll talk a little bit about DevOps culture. There are many different framings of this; I pulled this one off of Martin Fowler's blog. Many different ways of looking at it, but broadly we can think of DevOps culture as a culture of feedback, a culture of automation, and a culture of building quality in at the start. And all of these things revolve around a sense of shared responsibility with these cross-functional teams, Dev and Ops. So that's the team culture, which is built on a foundation of organizational culture: no silos, breaking them down, and autonomous teams, trying to ensure that there is enough capability in those teams
[00:25:40] that they can act without having to consult people from everywhere else in the organization, that they can get on and fulfill their mission. So that's the DevOps culture, more or less, in a nutshell. If we want to add product in, well, you could argue that feedback is already up there. But that feedback isn't the same thing as the slice that goes all the way out to production users. So I'm going to call that out separately as a thing we need to work on: the validation part of product culture. And so, how do we change culture? How do we change culture? Luckily, our friend Mr. Schein has some advice for us in his book on organizational culture. He says there are some primary mechanisms to embed culture. And the main one, the number one thing, is: what do you pay attention to? This is especially about leadership, about people who are seen as leaders. What are they paying attention to? What are they talking about? What are they calling out? What are they spending their time on? Some other mechanisms include reactions to big events. That in-the-moment thing is another way we embed culture: how do those leaders, or other people, react
[00:26:51] when big events happen. Then there's the way that resources are allocated: time, money. If we say we value team autonomy, are we actually putting resources behind making sure that can happen? Money is often needed in those cases to make that happen.
[00:27:09] And rewards. So the people who are getting bonuses, promotions, all of that: where is that coming from? These are the ways that we signal what we value in an organization. And finally, role modeling, teaching, coaching: all of those more traditional things, but things that people can be too shy to do sometimes when it's a value, not a hard skill. So these are the primary mechanisms for embedding culture; I've condensed this a bit from his list. And this is how I personally like to do things like that. I'm a big fan of dashboards. I'm a big fan of dashboards with something really big across the top reminding people what the objective is. Which is my primary complaint with a lot of project management software: you can't actually customize this to be the propaganda that you need. But I do like customizing a space. Here's a team I worked with (they don't look like they're using it, but they did actually use this), and there were several dashboards floating around the office as well. So: calling attention to that, making sure it's present, talking about it, making sure that in your stand-up you're referring back to it, that you're checking on it, that when you're planning, you're going back to it. So some of the things that I like to do to promote a close-the-loop kind of culture with teams: first of all, make that objective and the results really visible, all over, everywhere, sing it out, top to bottom. Normalize working in small batches. This almost could be a value in itself; it is so important. And I find that my teams have universally been too embarrassed to ship something really, really tiny, but that's the size we need to be able to validate it. So promote that culture and try to normalize working in those small batches. Getting all the way through the loop is what counts.
That kind of reinforcement has been really necessary in my experience. Then, making the time. This is about resource allocation: just making the time. As product people, we obviously have a lot of influence over how those teams spend their time. Make sure that we have the time and the tasks to collect and analyze those results, ourselves and others, that we are doing that work, that we are making that work visible, that we are including it in our planning. And acting on what we learn. This is where it tends to break down. We can look at it all we want, and this is the culture of measurement people are describing, but it doesn't really affect what we do. It's the acting on it. And again, different things will work for different people, but for me, I like to start planning by wiping the backlog clean. I mean, you can keep it there if you really want to, but start from a fresh slate of what you are seeing right now in what you're measuring and what you've learned, and try to take that forward. So those are my personal techniques. I'm sure you've got lots more; I look forward to hearing them. And that's validation, an essential key component for us to take product management and continuous delivery forwards. It gives us the opportunity to learn and to improve our product practices. But we do have challenges with this as well. And I felt like I couldn't talk on this topic without calling out some of the challenges that make this really hard with teams who are getting into that fast release cadence, when we try to product manage them using old-style techniques.
[00:30:35] So once we add this feedback loop in, I'll simplify this diagram a little. We can look at these as cycles. Taking out the clutter, it looks something like this. You have your inner development loops (I've removed some), and then you have an outer production loop, which is that business hypothesis loop. But these inner loops are getting shorter and faster and more reliable, and the outer ones are tightening up. And what has been my experience is that this causes a bottleneck elsewhere. It used to be that the bottleneck was delivery, back in the old days of shipping software out into the black hole, especially before we had Agile. But suddenly we had a bottleneck in product decision-making. We had so many little micro-decisions to be taken that a product manager couldn't be on hand to take them all, even if they were right there, embedded in the team. In order to have the information to supply those decisions, that person would have to be constantly on top of absolutely everything a development team was doing, and they can't be. And so we found that this caused a really big bottleneck in that decision-making. I'll talk about an example. I no longer work for these people, so I can speak both kindly and unkindly of them. We experienced this when our development teams had bedded in these wonderful practices, but we found that as they got really good at those things, engagement actually started to decrease, and so did effectiveness in decision-making; it was like everything was really hard to get done in the organization.
[00:32:16] And we traced this back to product decision-making bottlenecks. Product managers were holding those decisions really close and keeping control of them. It turned out the development teams were ready to take some of those decisions on, but the product managers were holding them too tightly; they weren't letting go. And we didn't have enough product managers either.
[00:32:38] So the whole thing again led to this feeling that things just weren't right. Despite these better development practices, people were feeling discouraged, and it didn't feel good. So we tried a restructure of the company. We tried to say: we are creating some autonomous teams now, you are now empowered. So the next bit of advice is to empower the teams to avoid being bottlenecked on product decisions. I can tell you what it's like to do this naively and how to do it wrong; I consider myself something of a world expert in disempowerment while trying to empower. This is not just a waffle word; empowering means something specific. It means that a person, or a team, feels enabled to make these decisions. In fact, I've done this so badly that I've given a talk elsewhere about the A-word, autonomy, and how I broke it. In short, the breaking was by saying: you're autonomous, here's your backlog, good luck, goodbye. That is not the way to promote autonomy. You don't simply give control of something to somebody. This was solved by a deeper application of David Marquet's Turn the Ship Around!. If you haven't read it, it's another wonderful book for understanding these things. From it, we can establish that the principles of empowerment are somewhat different from what we thought. It's not simply about control, "here's your control". That's not empowering. What is empowering has two pillars. I don't know if you can read these; I hope you can.
[00:34:15] One of these is technical competence, and that's not just technical in the sense of computing competence; it's whatever techniques and information are needed for doing the particular role. The other part is organizational clarity. From the organization, you need clarity about what that team's mission is and what that team is trying to fulfill, so it needs a scope that is nice and clear and comfortable to work in. And that team needs skills; it was especially on skills development that we really fell down initially. What we found we needed to do, in order to give control, was to train in product: we needed to train in these techniques of scope management, of ordering and timing. I'm not saying prioritization, because in an objective-based world that looks somewhat different. But still, thinking about techniques, understanding things like cost of delay, we needed to train our teams in some of those techniques so they could make good product decisions.
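As an aside from the editor: the cost-of-delay technique mentioned here can be made concrete with a small sketch. This is an illustration, not from the talk; the item names and numbers are hypothetical, and it uses the common CD3 heuristic (cost of delay divided by duration) to order candidate work.

```python
# Minimal sketch of CD3 (Cost of Delay Divided by Duration) ordering.
# All item names and figures below are hypothetical illustrations.

def cd3(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    """Higher CD3 means more value is lost per week spent waiting."""
    return cost_of_delay_per_week / duration_weeks

# Candidate work items: (name, cost of delay per week, duration in weeks)
items = [
    ("checkout revamp", 10_000, 8),
    ("pricing experiment", 4_000, 1),
    ("reporting export", 2_000, 4),
]

# Order by descending CD3: do the highest value-per-week-of-effort items first.
ordered = sorted(items, key=lambda item: cd3(item[1], item[2]), reverse=True)
for name, cod, dur in ordered:
    print(f"{name}: CD3 = {cd3(cod, dur):,.0f} per week")
```

Note how the short, cheap "pricing experiment" jumps ahead of the larger "checkout revamp" even though its absolute cost of delay is lower; that is exactly the kind of product decision the talk suggests teams can be trained to make themselves.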
[00:35:11] And finally, in thinking commercially, we needed them to level up. In fact, we found that many of them were ready; they were already asking questions that indicated a readiness to move in this direction. But we needed to supply them with information, with training, with what was known in other parts of the business, and to put that into the team, I won't say aggressively, but deliberately, and supply the team with all of the context it needed.
[00:35:42] So if we go back to that DevOps culture, that's the other piece we found was really missing. For a successful, empowered product culture in the land of continuous delivery, we needed that empowerment piece to be in place as well.
[00:36:00] So this isn't easy. We've talked about it; culture change is never easy. But one of the key points about culture change is that you can't solve it with technical practices. You need to recognize first that the problem is a culture problem. And that's what I think we're doing. I think that's where our discipline has gone: starting to recognize the culture problems around validation and empowerment, and recognizing them for what they are. So, in fact, I'm rather encouraged by this. If we recognize the type of challenge it is, we can come together as a discipline in places like this, talk about ways to solve it, and start to make progress in a way that levels up the whole playing field. Instead of a technical practice here, a technical practice there, and one company that does it well, we can take everyone forward by sharing those challenges. So I am encouraged, I am excited, I am excited by what I'm going to see today in the rest of this conference. And I'm excited about being able to bring facts, and not just opinions, into these discussions. Obviously we still need our judgment, we still need our opinions, but we can really dial up the level of facts that we are using in that decision-making. And just to close out this conversation about what it's going to take to make this the norm in our discipline, I want to talk about what Kirkaldy actually experienced in trying to make testing the norm in materials science. He found that for a long time, when he first built that machine, he had to fund it with his own money. He had to fund that giant 15-metre machine because nobody but him thought it was important enough to do. And so he built it. He found he could get a few clients around the world to ship him materials, primarily in Europe; I think the British didn't care as much about killing people.
And so, he worked as an independent consultancy, and over time people started to talk the talk. It became a sign of quality: oh, my materials were tested by Kirkaldy. They still weren't doing it for themselves, but they were at least talking about it: yes, this is good, we want to do this. And this persisted, and he got enormously frustrated with the situation, where people were talking the talk again but not really living that testing mentality. Until suddenly, towards the end of the 19th century, it flipped. He looked around and everybody was doing the testing; all the different companies had their own departments for it, and there were machines out there competing with his. It suddenly flipped. With all of these different people in all these different places, suddenly it came together and the whole attitude of the industry changed. It felt like overnight, but in fact it was a slow burn all the way through that century. So I think we're going to see that. I think we're seeing it now, as we all agree that measurement is important, we all agree that validation is important, and we're all developing those tools and techniques. So we're looking for that shift, which maybe we'll see, maybe not this week, maybe not this year, but hopefully soon enough for us to take advantage of it. Just to call out the books that I have based my thinking on, in addition to experience, the things that have helped me frame my mental models around this. Obviously Continuous Delivery; Accelerate, the book I mentioned, Nicole Forsgren's work, which explains how those mechanisms of continuous delivery work together to generate a better culture, a better environment, more developer satisfaction, all of those things. For goal setting, I particularly like Radical Focus.
But I also like this book called Thinking in Bets, which John Cutler has a lot of time for, and I do too: a way of conceptualizing our work, especially the risky work, as a sequence of bets on what drives what, and on what you can do to actually affect external behaviour, which is often a laddering of assumptions and bets you're making along the way. I think it's enormously helpful for taking something from an intent, an objective-level thing, all the way down to what we are actually going to measure on the ground in a tight loop. And then finally, around culture and empowerment, the Edgar Schein book I mentioned, which has lots of tips for adjusting culture, for bringing culture forward in healthy directions. And David Marquet's Turn the Ship Around!, which is such an important tome on empowerment, on enabling teams to take up more of, in our case, the decision-making around our products. I just want to say thank you again to the sponsors who've made it possible for us all to be here. And thank you for joining me today. Okay. Thank you.