Jabe Bloom
Transcript
[00:00:10]
Ah, even a slideshow. Excellent.
[00:00:14]
First of all, thank you for having me at Flocon. I really appreciate it. I think this is my third Flocon, and it's my first remote one. Today I want to talk about flow from some new perspectives, to offer some different opportunities to rethink how we think about flow, or to challenge some of the ways we think about it. I also want to talk a little bit about turbulence, and about how flow is both good and bad. The track we're on is called "the new normal," and I want to start by pointing out that normal is a weird idea, but it's actually a critical part of the way flow conceives of the world. Flow thinking often treats the world as having a set of normals. If we go back to early statistical process control and some of the ideas that birthed flow, we can see that the relationship between normalcy, or at least a standard distribution, and control was an important part of trying to understand how flow worked. The idea of studying a system in order to understand its flow, to increase its capabilities, and to make sure it's producing what we expect, specifically included the notion that there is a natural form of variation and an unnatural, unnecessary form of variation. We can hear echoes of this in software engineering when we talk about the difference between complexity and unneeded or unnecessary complexity in software architectures. So there's this idea that certain kinds of complexity, or certain types of variation, are natural and others are not. And what we get from that, I think, is the idea that when we're trying to create a flow system, what we're trying to do is keep the system within certain tolerances, so that we can have certain expectations of it and therefore act appropriately. We'll come back to why that matters in a little while. It's important to realize that when we talk about these ideas, when we talk about standard deviation inside statistical process control, we're talking about bell curves, about control limits, about staying in control and asking questions about how that works.
[00:03:41]
I'm going to point out two terms here that I think are important to understand as we move forward in the discussion, and that I think are very often misunderstood. The first is variation. When we talk about statistical process control, we're usually talking about variation: the unintended creation of difference in an output. Note that statistical process control only deals with things that can be accurately enumerated; they have to be quantifiable to be accessible to it at all, and they are usually thought of only as the outputs of one process and the inputs into another. Flow systems are often focused on reducing this kind of unintended difference. On the other hand, we have the idea of variety, and variety, especially in systems like Toyota's and other flow systems, is not a bad thing. It's a good thing. Variety is the intentional creation of difference in a particular outcome. You can think of the relationship between these two things as an important part of creating a flow system, because what we're intending to do is reduce variation in the system in order to increase its variety, its ability to respond to different needs. In the Toyota system, that means you have one flow line instead of multiple flow lines, but you also have multiple types of car being produced on each line, so you get that variety, that ability to go into different marketplaces.
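To make the control-limit idea concrete, here is a minimal sketch with invented cycle-time numbers: a Shewhart-style check that treats points inside mean ± 3σ of a stable baseline as common-cause variation and flags anything outside as a special cause worth investigating.

```python
import statistics

def control_limits(baseline, sigmas=3):
    """Shewhart-style limits: mean +/- sigmas * stdev of a stable baseline period."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - sigmas * sd, mean + sigmas * sd

# Hypothetical cycle times (hours) from a period we believe was stable...
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0, 4.2, 3.9]
# ...and some new observations to judge against those limits.
new_points = [4.0, 4.3, 9.5, 4.1]

low, high = control_limits(baseline)
print(f"control limits: {low:.2f} .. {high:.2f}")
for x in new_points:
    verdict = "common-cause variation" if low <= x <= high else "special cause: investigate"
    print(f"{x:4.1f} -> {verdict}")
```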
[00:05:53]
So what we're looking for is being in control: the stable production of quality output. That's roughly the goal of a traditional flow system. Then there's a question about how normal and flow are related. Well, first we should talk about what flow is, and I don't want to limit it to the traditional Toyota, mechanistic, manufacturing versions of flow; I want to talk about it more broadly. Donella Meadows simply says that a flow is material or information moving from one system to another. When we think about it that way, we have two things: a stock, a place where material or information resides, and a flow, the movement of material or information from one place to another. You can see that when you look at Meadows' work, and we'll look more at that in a minute. Another conception of flow we might talk about is the idea of being in a flow state: being in control of oneself and one's thoughts, in relation to challenges and skills, so that one is neither bored nor too anxious. A flow state is one in which a person continuously applies their skill to increasing levels of challenge by balancing anxiety and boredom. That's maybe a second idea. And then there's a third idea of flow that I think is important to recognize, which is that flow has something to do with a natural state of the world: water flows to the lowest point, and good systems, systems that flow, are fluid, flexible, and responsive to the environment. You get this idea of being empty-minded or formless, where having biases, or thoughts that go against nature's way, is what causes a lack of flow, and so you need to become more like the natural state of the world and flow like that. So those are maybe three different ways of flowing. Let's look at Donella Meadows' version more closely, because it's going to be more aligned with some of the traditional ways of thinking about flow. We can think of flow as the idea that there are different parts of a system: stocks, places where resources reside, and flows, the movement from one place to another. And Meadows points out that it's not just physical flows; information also flows from one point in the system to another.
So we get this idea that it's information and material flowing in a system. She often draws these as stock-and-flow diagrams, where the stock is a box where something is held, and the flow is indicated by pipes with little turny handles, valves, on them. The handle, I think, is meant to indicate a sense of control: the idea of controlling a system, if we think of it as a flow system, is to open or shut the volume of information or material moving through the system using those valves. When we think of flow inside a manufacturing metaphor, or a software engineering metaphor, we usually don't think of controlling a system by shutting flow down or slowing it, although the theory of constraints certainly thinks that way.
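A toy stock-and-flow loop in the spirit of Meadows' diagrams, with made-up numbers: one stock, a fixed inflow valve, and an outflow valve whose setting is adjusted by information feedback about how far the stock is from a target level. "Controlling the system" here is nothing more than turning the valve.

```python
def simulate(stock, target, steps, inflow=10.0, gain=0.5):
    """One stock, a fixed inflow, and an outflow valve driven by information
    feedback (the gap between the stock and a target level)."""
    history = []
    for _ in range(steps):
        outflow = max(0.0, inflow + gain * (stock - target))  # information sets the valve
        stock = max(0.0, stock + inflow - outflow)            # material actually moves
        history.append(round(stock, 1))
    return history

# Starting above target, the stock settles toward 100 as the valve responds.
print(simulate(stock=150.0, target=100.0, steps=12))
```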
[00:11:12]
We more often think of it as eliminating all of the valves and maximizing the flow through the system. But it's important to notice: we've got stocks, we've got flows, and we've got this idea that being in control means being in control of the movement of materials or information through the system. And if we look at a traditional value stream map, we see exactly the same things. There's a stock of information in the production control system, and information flows from the production control system to the supplier. There's a stock of material inside process A, and a flow of material and information from process A to process B. So again, flow is about the movement of information and materials through a system. One thing I'll point out is that these systems are often designed as control loops. In this one, if you follow the black-and-white arrows and the gray arrows around the circle, you'll see that what starts at the supplier ends up at the customer, and then there's a feedback loop from the customer back to the supplier. So there's a control loop, and when we talk about being in control, part of what we're talking about is the supplier being responsive to the information coming back from the customer in that feedback loop, a closed-loop system. Don, who is everybody's favorite flow person, likes to talk about this as well, and he points out that one of the ways to stay in control isn't simply to close the loop; it's to deliver information to the system at a regular cadence, in small batches. So what Don is saying is that it's not simply a matter of having a loop, it's about removing cycle time from the loop and creating smaller batches, in order to get what I like to think of as a closer sampling of reality. The sample rate of your CD gets better because you're sampling from reality more frequently.
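This is not Don's model, just a toy simulation of that "closer sampling of reality" intuition, with invented rates: demand drifts a little every period, and the plan is only re-synchronized when a batch completes, so bigger batches mean the loop closes less often and the plan drifts further from reality between samples.

```python
import random

random.seed(1)

def tracking_error(batch_size, periods=120):
    """Average gap between drifting demand and a plan that is only updated
    when the feedback loop closes, once per batch."""
    demand, plan, gap = 100.0, 100.0, 0.0
    for t in range(periods):
        demand += random.gauss(0, 2)   # reality drifts every period
        if t % batch_size == 0:        # the loop only closes once per batch
            plan = demand              # re-plan from the latest signal
        gap += abs(demand - plan)
    return gap / periods

for batch in (1, 5, 20, 60):
    print(f"batch size {batch:2d}: average tracking error {tracking_error(batch):5.1f}")
```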
[00:14:00]
And so we can point out the same ideas here again. Don's argument is roughly that holding costs arise when things sit in stock, in the parts of the system where they're not moving, where they're being held. There's a cost to having them there. In the case of information, holding or withholding information from other parts of the system has a cost, in that the system can't respond to that information. The opposite is the transaction cost: the cost of moving information from one part of the system to another. In this way the total cost, the economic cost of a flow system, according to Don, works like this: if you reduce the transaction cost, you can shift the total cost curve to the left, because you can more quickly move things out of the holding space, out of the stocks. You get the same thing when you look at the kind of diagram Donella Meadows would have drawn: the holding area is your inventory of cars, there are multiple feedback loops and multiple opportunities for control, those are the little pipes, and the transaction cost is simply the cost of opening, closing, or modulating those valves, which doesn't happen for free. We already saw that slide, so that's not very interesting. So then, I think, if we look at this stuff and think it through, what we're trying to do in a flow system is increase the rate of production of the system, by increasing the rate at which information and materials move, while staying within control, within some idea of stability. And this is a very engineering-ish view of how a system works. The result is the idea that what we want to do with a system is understand what it should be doing, what the expectations are, and correct for deviation in order to make sure the system is producing what we expect. I'll point out again, here following Holling, that the critical part of that idea is the assumption of predictable external conditions. If you don't have predictable external conditions, these ideas about having a stable system become difficult to think through the implications of. I will say, just so Don doesn't yell at me, I don't think that's the way Don thinks about it, but I do think it's the way some of these ideas end up interacting with each other. So let's talk about some different ways of thinking about these ideas.
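A sketch of that trade-off with illustrative numbers only: total cost is transaction cost (paid once per batch) plus holding cost (work sitting in stock); the curve is U-shaped, and cutting the transaction cost moves the optimum toward smaller batches and shifts the whole curve down and to the left.

```python
def total_cost(batch, demand=1000, transaction_cost=50.0, holding_rate=1.0):
    """Total economic cost of moving `demand` items in batches of size `batch`:
    a transaction cost per batch plus a holding cost proportional to how much
    work sits waiting in stock on average."""
    transactions = transaction_cost * (demand / batch)  # cost of moving work
    holding = holding_rate * (batch / 2)                # cost of work sitting still
    return transactions + holding

for tc in (50.0, 5.0):  # cheaper transactions -> smaller optimal batch, lower total cost
    costs = {b: total_cost(b, transaction_cost=tc) for b in (10, 50, 100, 200, 320, 500, 1000)}
    best = min(costs, key=costs.get)
    print(f"transaction cost {tc:5.1f}: best batch {best:4d}, total cost {costs[best]:7.1f}")
```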
[00:17:44]
First, let's talk about hierarchy and stability, and how they're related. One of the things we get when we talk a lot about flow thinking is the idea that there's a linear movement of materials and information at the edge of the system, and you get ideas about the need to distribute decision-making to the edge of the system in order to improve its performance. So hierarchy tends to take a good beating in these conversations. Before I go on this rant, I want to be clear: I'm not specifically talking about managerial hierarchies here. I'm talking about a different idea of hierarchy. So let's start with Alicia Juarrero's quote here and then see if we can move on from there. I'm super frustrated by my camera, and I'm also frustrated by my slides randomly moving by themselves.
[00:18:59]
Please go back.
[00:19:03]
Now I've lost control of my slides.
[00:19:09]
Why are you randomly moving?
[00:19:12]
Sorry guys, just one second. I apologize.
[00:19:38]
Okay.
[00:19:53]
I'm just going to have to figure that out as we go, I guess. All right. Juarrero wants to say, first of all, that complexity arises from systems moving far from independence. Again, this is counter-intuitive for a lot of people who are into lean and flow thinking, a community that really embraces autonomy and the importance of autonomy. Juarrero is specifically saying that the way systems become complex is that they become interdependent, not that they become independent. The nature of complexity is for systems to become dependent on themselves, or on their parts. One way to think about that is through what we would call mereology, the study of wholes and parts. The obvious relationships in mereology are part-whole relationships, how a part is related to the whole, which are often simply called bottom-up relations: the idea that systems are defined bottom-up. Whole-part relationships are the idea that systems are defined top-down, where the part is constrained by the whole, where the part gets its meaning from the whole. And there's a final one, which I call whole-part to whole-part relationships: pairwise relationships. The way I usually put it is: I'm a person, a whole person, a whole system, not a part, and I have relationships with other whole beings, other whole persons. That's a pairwise relationship. But of course I'm also an employee of Red Hat, so I am part of a system, yet I'm still a whole part. So we have pairwise relationships too. We have hierarchical relationships, both top-down and bottom-up, and also pairwise, lateral relationships. So then we have this idea from Alicia Juarrero that a system organizes itself in a particular way, if we take a complexity theory view of it, and it's a kind of three-step process. Instead of thinking of a system as doing bottom-up or top-down relationships, what we can see is that there's actually a sequence, a way in which a system unfolds. The idea is this: you get some parts, random things in the world, that are assembled or configured in some way. Because that is parts becoming a whole, we'd call it a bottom-up way of coming together, a way of becoming a system. Now, when a system comes together, it becomes a new whole based on this assemblage of parts, and one of the things it has to do is be exposed to an environment, which determines whether or not it survives in that environment.
If it survives in that environment, then it becomes a whole, and what I mean by that is it becomes a sustained whole. To the extent that it becomes a sustained whole, that whole, which is now being used by other systems in the environment, consumed or pressured by them, starts exerting top-down pressure on the parts. The idea is roughly this: originally the parts could have been doing anything, in any relationship they wanted. They come together into a relationship, environmental pressure is put on them, and when the parts as a whole function inside the environment, the whole suddenly starts creating downward pressure, and it stabilizes not only the relationships between the parts but the parts themselves. You get a stabilizing effect when this happens; it's the creation of a system. And so we get whole-to-part, top-down constraints. The parts are no longer independent, isolated particles; they become part of a system. And in order for the system to reproduce itself and remain stable, those parts have to remain relatively stable themselves. That doesn't mean they have to stay exactly the same, but they do have to stay relatively the same, and the relationships between them have to stay relatively the same. On the other hand, now that the system has created a new whole, that new whole is potentially a part of another system. That gives the system it becomes part of more possibilities, more options in its future; the phase space of that system increases. As new wholes become potential new parts, those new parts create new phase spaces, new potential wholes. And therefore what we see is a kind of hierarchy: a layering of systems on top of each other, where each subsystem becomes a part in a new supersystem, and when that supersystem becomes effective, it puts downward pressure on the parts that make it up in order to stabilize them. So in this way, stability doesn't pre-exist the whole; it's created by the whole. The parts are not stable until there's a reason for their configuration or relationships to become stable. That's a different way, I think, of thinking through what a system is and how a system comes to be. In particular, you'll note that the way in which the system is stabilized is not the way in which it's designed. Design only enables phase one, the parts becoming a whole; it's the testing, the actual existence of the whole in an environment, that stabilizes the system. So that's an important caveat. When we look at this stuff, we're looking at a different way of talking or thinking through how systems work, and why they work. And one of the things we can say is that there are probably two aspects to the system: a social aspect and a technical aspect.
So Holling's version from earlier, the engineering view of how you look at a system, how you correct it, how you try to stabilize it, is contrasted here by saying that's not the only aspect of the system we need to pay attention to. We also need to pay attention to what we call the socio-technical aspects of the system. What we can say here is that we have a flow system created from two bottom-up parts: a technical subsystem, a subsystem of things, and a social subsystem of people, organizational structures, and so on. When we combine these things correctly, we create a greater phase space, as described above, which defines what the system is capable of doing, what's possible for it to do. But of course, when we create that system and test it in an environment, the new system puts downward pressure on the technical and social aspects of the system to try to prevent them from changing. This is one of the reasons change inside organizations is so incredibly painful: the way the system stabilizes itself is by stabilizing the parts and their relations, and it resists moving away from them. So when we think about systems this way, we're moving away from an HCI or ergonomics view of systems. A less jargony way to say that is that a lot of early Toyota and manufacturing theory, and even software engineering theory, has to do with an individual's relationship to technology. What we're looking at there is how to optimize an operator operating a machine. We can improve the machine by improving its ergonomics, or we can improve the human by improving their skills, but the relationship is between an individual and a particular type of machine. We even see this when we talk about being multi-skilled, or having multi-skilled organizations, where individuals don't focus on just one type of machine but maybe several. Instead, what we're starting to point at when we talk about this new potential way of thinking about flow is that social interactions, the ways individuals work in groups, are mediated by technical machines. In other words, I'm talking to you through a computer and through slideware, which is super frustrating for me right now because it keeps breaking, and the mediation isn't working as well as I want. But that means there's a social interaction happening here through technology, and that is not the same thing as doing, for instance, Toyota Kata in order to understand how to make a machine produce screws within a particular tolerance. Because what's happening here is less about the idea that materials and information are created at a particular workstation and then moved to another workstation, and more about the idea that the interaction between the workstations creates information. And that's a completely different way of thinking about it.
We're not moving information from one place to another now; we're creating information, or creating material conditions, by interacting with each other through a social system. So when we think about this, we end up thinking through ideas of mediated boundary conditions, where the technology is the thing that creates, enables, or disables the flow of a social interaction in a system. When we think through flow this way, what we're thinking about is the way in which tools, material systems, and technologies either enable or disable the flow of social capital through a system, not the flow of information or materials through a system. Other ideas you could start playing with here are things like skills liquidity, and other ideas about how flow is about the social system's ability to reshape itself around the technology as systems or environments change. I like to talk about all of this as something I call socio-technical architecture. And one thing to observe quickly here: I'm not saying that things like statistical process control and the reduction of variation are bad. Those are good things. It's important that a system be in control, to the extent we want or need it to be, in order to reproduce the whole we're working within. One way of talking about that whole-part, top-down interaction is this: in the United States, everybody consumes electricity in order to do almost everything now. The production of electricity at a very stable rate, for a very stable economic cost, is an important part of our social whole. When we can't produce electricity at a stable rate, or we have high variability in the rate or cost of production, the other systems built on top of it don't function well. So when we think about architecting systems, we can say there's a certain set of load-bearing requirements. For instance, if you build a bridge, it's fine to build bridges in many different ways, and for many different social reasons, to enable different social interactions. Those are all fine things. On the other hand, you should be able to answer the question: can I drive a 10-ton truck across this bridge? That's part of architecture, part of being an architect. So on this side we have ideas like making things fail-safe; the risk we worry about is that we designed the wrong thing, and we want to determine that very quickly; and these concerns are often bounded or constrained by quantifiable, physical laws. On the other hand, there's another relationship we need if we're doing socio-technical architecture: with physical architectures, the best we can do is create robust and reliable systems, using redundancy, analysis, and things like that.
But if we really want truly resilient systems, systems that can respond and adapt to change, we need to interact with the social side of the system. In this case, what we're worried about are things like cognitive engineering: can people understand the system we're building? We aren't talking about risk at the design phase but in the use phase: while it's being used, does the system expose us to risk? And so we're trying to figure out ways of creating safe-to-fail systems. It's the interaction between these two ways of thinking that, I think, produces new ways to think about flow, new ways to think about architecture, and new ways of organizing systems and businesses in the future.
[00:38:41]
So why is it important to think about socio-technical architecture? Well, one way to think through the ideas of statistical process control is to notice that even there, the theory is never to create an absolutely non-variable output from the system. It's to keep the system within a certain variability, not within zero variability, not within a single tolerance. And if we look at it slightly differently, one of the things we might say is that the reason we allow for six sigmas is that all systems, all technical systems, are failing at all times. They're all failing a little bit, not a lot usually, but a little bit. They are always in a state of continuous partial failure. And the way systems appear to satisfy the expectations we have of them is that humans, the social side of the socio-technical system, are the ones who re-establish flow, who re-establish the system's expected results. So what we have here is an idea we would call skillful coping. Skillful coping is not a way of thinking about the world in which one has skill because one knows how to do something specific. Skillful coping is the idea that the technical system is always wiggling around underneath us in relatively unexpected ways, and what we are doing is reconfiguring it, re-establishing relationships, recreating the system constantly. We skillfully cope with the system's inability to reproduce itself by itself. If you don't believe me, well, probably not everyone is old enough for this joke anymore, but how often did you have to reset your VCR clock when you were a kid? All the time, because the VCR can't operate by itself; it can't even operate a clock by itself. Skillful coping is this ability to make the system do those things. And there are some odd aspects to skillful coping when we think about it from a flow perspective, or as a way of improving flow in organizations, because coping can be both good, as we cope with things we keep the system stable, and bad: if we spend our ability to skillfully cope on trivial things, we consume some of our skillful coping budget, our capacity to take on new challenges. So one way to think through this is to note that when we're talking about flow and organizations, we don't just need to observe the immediate results of a step: this step produces this waste, or this step produces this output, which is consumed by this group. All of those things are valid and important. But another way to think about it is: when we perform this step, are we consuming the human, social aspect of the system in a way that could limit our ability to respond to other possible outcomes in the future? This is a strange way to talk about it, and I think it sounds odd to people sometimes, but it's important. I'm not talking about making the system more efficient.
I'm talking about the way in which, if we reduce things like toil in a system, we are not just making the system more efficient; we're actually making it more capable of responding to greater change. By removing or trying to eliminate toil, which is human beings doing the same low-value tasks repetitively over time, by reducing that and improving the system, we don't just potentially save money or make the system produce better outcomes; we also increase our chances of being able to respond to unexpected outcomes in the future.
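To put numbers on the earlier point that a system held inside its control limits is still always failing a little: assuming a normally distributed output, the fraction that falls outside ± k sigma is never zero. (These are the plain normal-distribution figures; the conventional "3.4 per million" Six Sigma number also assumes a 1.5 sigma drift, which this sketch ignores.)

```python
import math

def fraction_outside(k_sigma):
    """Share of a normally distributed output falling outside +/- k_sigma."""
    return 1.0 - math.erf(k_sigma / math.sqrt(2.0))

for k in (1, 2, 3, 6):
    out = fraction_outside(k)
    print(f"+/-{k} sigma: {out:.2e} of output off-spec  (~{out * 1e6:.3f} per million)")
```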
[00:43:52]
All of this is clearly about complexity, and I think it's important to notice something about complexity here, again from socio-technical systems theory: the way in which systems tend to move towards certain types of complexity, and how they don't have a lot of options about it. The short version of this is Ashby's Law, which basically says that as systems want to address more and more complex environments, they themselves have to become more and more complex. There's an interesting way of thinking this through, both as an organization and as a team, a platform team, or a subsystem inside an organization. Roughly: at the smallest sizes, teams can be almost completely reactive. They can interact directly with the environment, and all the information they gather from the environment can be shared with every member of the team. The result is that such teams tend to see the environment as a place full of potential value, and their job is to discover that value. This should sound like a startup to everybody, because it is; that's how startups work. As that value is discovered, there's another level the organization can move to: the discovery that value isn't actually randomly distributed in the environment, it's located in particular places, in particular parts of the environment. At that point they identify good hunting grounds and bad hunting grounds, and they start to care about strategy, because strategy is now about where to hunt and how to hunt in those particular places. It's the application of the tactics they learned while hunting for value anywhere to specific hunting grounds, which have specific tactics and therefore need specific strategies. So the system is growing in complexity a bit, and at this stage the environment usually isn't considered combative; it's still opportunistic. Now, at some point, other players in the environment notice where you're going. They notice your hunting grounds, and they show up and start hunting too. Competitors appear, and it becomes important for people in your organization to know things about hunting, because that knowledge has to move around quickly so you can adapt to a new competitor. The environment isn't simply there to harvest anymore; it's pushing back, because there are other players in the game. And so operations emerges: the idea that you need to be able to deploy teams into certain hunting grounds and withdraw them, things like that. And then finally, at the last stage, systems become very complex, because the competitors' interactions start changing the nature of the hunting grounds themselves.
In other words, say the elk start moving to different places because of the pressure of multiple competing groups hunting the same grounds. As a result, the organization needs to move from a footing in which it treats every other player on the board as a competitor to one of strategic partnerships. They actually need to create shared purposes. It can't simply be that one team or another wins; the teams have to share a vision and pursue common goals. Now, the way I described that mixed several metaphors, tribal or group hunting, startups, and so on, but you can also think of it as the way the complexity of subsystems inside large organizations grows: from the initial ability to create success simply by finding value and showing it, to repeatable value, the hunting-ground metaphor, to the need to operate that value in relation to other offerings, a competitive operational version of the system where my flow system flows better than yours, so you should pay me, to the final stage where the system becomes so complex that you can't operate it without shared purpose. So that's one way of thinking through how complexity emerges inside organizations, and how the nature of information flows, where information flows from and to, changes as the system gets more and more complex. The other thing we can think about, when we think about complexity in relation to flow, has to do with when and how stocks are released. Most value stream diagrams are a line; stock-and-flow diagrams often have many loops in them. But what we can think about briefly is this: flow is normally thought of as trying to determine where in the system there is a constraint, with the idea that the constraint exists as something you can point at. It could be a machine, or a process, or, in The Phoenix Project, it's Brent, a person. There is another way of thinking about constraints and flow, however, in complexity theory, which is that constraints can be emergent, emergent from the interaction and combination of releases of information from stocks. One of the simplest ways to think about this in relation to flow is that the reason we do continuous integration is that the longer we wait to integrate a system, the greater the holding cost of the information in the subsystems that are not integrated. To the extent that we have multiple subsystems with a long time period between integrations, we release a large amount of information into the system all at once when we finally do integrate. And systems tend not to be designed to absorb that much information.
They're used to consuming information at a regular cadence, and all of a sudden they get a massive amount of information at once. So we can say that the longer we delay the interactions between systems that will eventually need to interact, the more likely we are to produce surprising information that the system will not be able to consume. And you get three different costs there: the cost of the risk of the integration; the cost of doing the integration, because we have to make more changes; and the duration of the fixes, because the longer we haven't talked to each other, the more we'll need to talk to each other to understand each other. You get this triple interaction, and it's not clear how it will play out. I think that's a simple version of this idea of emergent constraints that we can think about when we think about flow. By doing things like reducing toil, and by minimizing unnecessary interactions with systems, specifically the social interactions with technical systems, we increase the capacity for absorbing new information during interactions. And again, the thing I'm trying to point out repeatedly is that this is not the same thing as saying I add value during my process by creating information and then handing it downstream to someone else. What I'm saying is that it is the handing of my thing to someone else that creates information, and therefore, in high-flow systems, one of the things we would expect is the ability to ingest generated information upstream, not just consume pre-generated information downstream.
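A toy model of that triple cost, with invented rates: each pair of changes from different teams has a small chance of conflicting, so deferring integration grows the batch of unintegrated changes, and the surprise released at merge time grows roughly with the square of the delay.

```python
def integration_pain(days_between_merges, teams=4, changes_per_day=3, conflict_rate=0.02):
    """How much unreleased information piles up between integrations, and how
    many cross-team conflicts we should expect to surface all at once."""
    changes = teams * changes_per_day * days_between_merges
    per_team = changes // teams
    cross_team_pairs = changes * (changes - per_team) // 2  # pairs from different teams
    expected_conflicts = cross_team_pairs * conflict_rate
    return changes, expected_conflicts

for delay in (1, 5, 20):
    changes, conflicts = integration_pain(delay)
    print(f"merge every {delay:2d} day(s): {changes:3d} changes land together, "
          f"~{conflicts:.0f} expected conflicts to untangle")
```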
[00:54:18]
I think I might be able to click my slides now. Okay. So all of this is pointing towards what Fred Emery would call an open system: a system's ability to interact with its environment, and the idea that by interacting with its environment the system will, ideally, spontaneously reorganize, and the reorganization will be towards greater heterogeneity and complexity. In other words, the system will get not simpler but more complex, and it will achieve a steady state, a stability, that top-down interaction we talked about, at whatever level the system is capable of continuing to work at. One way to think about this quote quickly is that systems that have a hard time doing that never achieve their maximum potential, because they're consuming most of their time dealing with remedial tasks. As a result, we need to look at the way transformations occur inside organizations with the understanding that simply adopting new technology, without considering the social impacts of the way that technology will mediate social interactions, is problematic.
[00:55:47]
So when we think about this, what we're doing is trying to reconceive systems, with complexity theory, not as systems that can be transformed, but as systems that are constantly undergoing transitions from one state to another. What we see inside a system conceived of as a complex adaptive system is that there is actually no stable state to achieve; the interactions are continuous; and therefore what we should optimize for, when we think about flow, is the consumption of new information in the system, as much as we can. This is also to point out that when we look at systems with this complexity lens, we have to grapple with the idea that reduction in cycle time is itself bounded. If we go too fast, we cause problems. And this is where we get to the title of my talk, and then I'm going to stop talking. When we see systems try to perform above their ability to ingest the change and new information generated by their interactions, we get what sociotechnical systems theory calls a vortical organizational environment. And we see three distinct things happen in organizations when this occurs: dogmatism, polarization, and stalemate. Dogmatism is people saying there's only one way to do it, and the way to do it is my way. Polarization is the inability to interact with others, a kind of disintegration of the organization. And stalemate is where, because people now have completely different ways of seeing the world, they can no longer negotiate effectively. When we see these kinds of interactions, we're seeing an organization being overwhelmed by the amount of complexity it's ingesting. And the normal reaction to that, by the way, I think, is to try to reduce the complexity of the system, as opposed to making the system more capable of handling complexity. Those are different things. You can see this in ideas like How Buildings Learn, where different parts of the organization move at different speeds, and turbulence in organizations is created when one layer of the organization moves significantly faster than the other layers. For those of you who've done agile, this is the traditional immune-response problem: one part of the organization accelerates quickly away from the rest, and then the rest of the organization tries to kill it off, because there's now tension between those two systems. And so what we can see here is this idea. I'm going to go to this slide first.
[00:59:11]
There are two forces that happen inside an organization when it tries to ingest too much complexity too quickly. One is disintegration: the way in which organizational parts, in a traditional value stream view, can't talk to each other anymore; they don't see or understand each other well anymore. The second is delamination: the way in which the organizational hierarchy comes apart. This is often just seen as organizations where the teams executing don't understand the strategy; they're delaminated from the strategy; the layers of the system are coming apart. So, three things I think you should take with you when you leave here. First, when we talk about the new normal, we shouldn't talk about a new normal, we should talk about new normals. In an ecological theory, we should think about things not with this engineering metaphor but with an ecological metaphor about the interactions between parts, in which case we're not looking for equilibrium, we're looking for equilibria; there will be multiple new stabilities. And to the extent that we try to eliminate that variety as opposed to the variation, we will cause the system to become brittle again. So the ideal new normals will not be about a new universal view of the world, but a new pluriversal view of the world, in which there are multiple equilibria, not a single equilibrium. The second is to recognize that when we think about systems getting more complex, we need to remember that there are different levels they need to be able to operate at, and it's not one versus the other; there's an interrelationship between three different ways of being: the ability to stably reproduce basic functions; the ability to efficiently reproduce change, the way in which we change components in the organization; and finally, the idea that the organizations that will be most successful in the new normal will get to a dynamic, open-ended transformation, where they focus on transitions as much as possible. And finally, just because I'm about to go vote tomorrow and I'm worried about it, I like to quote Karen Barad.
[01:02:11]
The interactions between yourself and others, between your teams and others, between your organizations and others, that is how worlds and meaning arise. Existence is not an individual affair; it's a group sport. Thank you for listening to me and for tolerating my awful slide experience. Thanks.