Donald Reinertsen
Transcript
Bonjour, mesdames et messieurs. [Hello, ladies and gentlemen.]
So this is my third time speaking at Lean Kanban France. So you may think, oh, there's a good sign there, that the title is in French a little bit, so maybe I'm going to present in French. But c'est dommage. [That's a pity.]
C'est plus facile pour moi de parler en anglais. [It's easier for me to speak in English.]
But we will talk about Système D before the end of the presentation. So we'll have some French ideas, but maybe not much French language.
So what I want to talk about today is the topic of variability: how we look at variability, and how the way we look at variability affects the way we behave. And one of the responses to variability is a belief that robustness is really an appropriate solution to the problem of variability. So I want to look at that and then talk a little bit about some other ways of thinking about it.

I'll start with probably the primary conception of a lot of people. Certainly, the more you learn about lean manufacturing, the more you will end up believing the statement on the screen: that all variability is bad, that the key to becoming successful is to eliminate variability in processes. And that, to me, is perhaps the most toxic idea present in lean manufacturing for a product developer, the belief that variability is always bad. Because in the domain of product development, we are operating in very different economic terrain than the terrain a manufacturer operates in; our world is quite different.

But let's say we assume that variability is bad. Then certain behaviors will seem to make a lot of sense to us. We will recognize that the world is in fact variable, that the world is stochastic, that we have variability in outcomes. And our problem is that we have a negative tail on that distribution of outcomes, and we're trying to protect ourselves from that tail of bad outcomes. There are probably two very common strategies we use to do that. One is that we explicitly make low-variability choices. We say: how can I prevent surprises? I will just do things that are very easy to do. I will only undertake things that I can guarantee will be successful. And that's actually what happened at 3M when they implemented Six Sigma in their development processes. All of the innovation disappeared in 3M products. They went from having 60% of their revenue coming from new products down to having 20%.
Because the engineers were always making choices under the new set of incentives. If they could choose one path that has high variability and one that has low variability, they would always choose the low-variability path. And there's not a lot of innovation found on the low-variability path. The other approach we use, and all of you would be familiar with it, is that I make a high-variability choice, but I hide the variability by inserting a buffer or margin in what I commit to. Let me just show you these visually.
The first choice of making the tail small is really this one saying that if I can have a very narrow distribution, then the actual chance that something bad will happen will be quite small. But that means I don't undertake anything difficult. But it would be as if your child was in school and you said, I'm unhappy with your grades, and your child says, I can fix that problem. I will take the easiest courses in the school. And that will, in fact, fix that problem, but it's not going to give them a very good education.
Now, the other approach we use is we insert a buffer. This would be as if your child came to you when you said, I'm unhappy with your grades. In the grading system in America, the top grade is an A, and then it goes B, C, D, and then F. If your child came to you and you said, you're not getting enough A's, and they said, well, I've got a solution: let's redefine B as being an A. I can end up fixing this problem if we shift the scale. And that's what we do with the buffer. We say: I think I could get the project done in 12 months, but I'm going to commit to 24 months so that I can reduce the risk that I miss my schedule. And you have to think about whether that is a good economic choice. That's really the issue I want to get into: are these strategies good economic choices for product developers? The first strategy is a good economic choice in manufacturing, for repetitive manufacturing processes. Is it a good economic choice in product development? I would say the problem with the first choice is that we add value through innovation in product development, and you'll drive out the innovation. The problem with the second choice is a little more complex, and I'll try to depict it a little bit graphically, maybe more from a probability perspective. On this diagram, I am plotting the completion time of the project as a cumulative probability function. What this is saying is: there's a 1% chance I will complete by this time, a 10% chance by this time, a 50% chance by this time, a 100% chance by this time. It's a classic application of probability to scheduling. And when I commit at a 50% probability, I'm saying there might be a 1% chance it's delivered this late. A horizontal distance in this zone tells you how late you are.
And the vertical distance tells you the probability that you will be that late. So the size of the purple area is actually the expected delay that I have when I commit at a 50% probability. And if our management is pressuring us to eliminate that expected delay, what we can do is shift up to a 90% commitment point. We can say, let's commit at this point, and then that green area of expected delay is much lower. And this looks like free money to us, right? But you're actually paying something to reduce the variability of cycle time. The question you want to ask is: what currency am I spending to buy a reduction in variability? Because if the currency you are spending to buy the reduction in variability is more expensive than the variability is, then you are hurting your economics. And in fact, when I use a schedule buffer, the currency I'm spending is actually cycle time. It's cost of delay that I'm trading. So when I end up extending the schedule by 12 months to make sure I have high confidence, I'm losing all of those opportunities to have delivered that product earlier, because I made the very conservative commitment. So there's a problem sometimes with using buffering as a strategy to deal with variability. Now, one of the other overall approaches we use, which is sort of the key topic, is robustness. We tend to believe that variability is bad and robustness is good. I won't survey you; does anybody in the audience think robustness is bad, that it would be bad to be robust, that robustness has a negative connotation? Most developers do not. And I want to talk about two different types of robustness. Robustness doesn't just come in one flavor.
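As a rough numeric sketch of that commitment-point tradeoff (not from the talk: the lognormal shape, the 12-month median, and the spread are all my assumptions), you can sample a completion-time distribution and compare the expected lateness at a 50% versus a 90% commitment point:

```python
import math
import random

random.seed(0)
# Hypothetical completion-time distribution: lognormal, median ~12 months.
times = sorted(random.lognormvariate(math.log(12), 0.4) for _ in range(100_000))

def commit_point(pct):
    """Completion time at the pct-th percentile of the sampled distribution."""
    return times[int(pct / 100 * (len(times) - 1))]

def expected_lateness(commit):
    """Mean of max(T - commit, 0): the shaded 'expected delay' area."""
    return sum(max(t - commit, 0) for t in times) / len(times)

for pct in (50, 90):
    c = commit_point(pct)
    print(f"P{pct}: commit at {c:5.1f} months, "
          f"expected lateness {expected_lateness(c):.2f} months")
```

The 90% commitment does shrink the expected lateness, but only by pushing the committed date out by months of cycle time, which is exactly the currency being spent.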
And I call them, and I don't know if there's a standard terminology for this, but this is a useful terminology for me: I contrast the idea of passive robustness with the idea of active robustness in a system. In passive robustness, what you do is you make the system robust. A good way to assess robustness is to think about it this way: if I hit the system with a perturbation, something that has a negative effect, how much does the system end up changing when it absorbs that negative effect? So it's the size of the consequences of a disruption to the system. And the standard ways we create this robustness in a passive sense are: we increase the margin of the system, we increase the inertia in the system, we increase the redundancy in the system. And that produces a system that doesn't move as much when we have a negative impact. And an analogy I would use for this: you do a lot of sailing in France and things.
The America's Cup yacht races are a very famous race in sailing. Back in 1995, there was an extremely important upset in those races, when a very tiny country named New Zealand ended up beating the country that was spending the most money in the world on yacht design, which was the United States. It's a very fascinating story. If you read the story of what both teams were doing, you would conclude that the New Zealand team was using lean and agile concepts and the American team looked a lot more like a waterfall process. But I don't want to go into that story as much as the overall notion of what passive and active robustness are when you're racing a yacht. With passive robustness, you would say: my problem is that the winds are variable and I cannot eliminate the variability in the winds. As much as I would like to control the winds, I cannot control the winds when I'm sailing a boat. And strong winds will cause the boat to capsize, and you don't win races if the boat capsizes. So you really don't want your boat to capsize in strong winds. One way to avoid capsizing is to expose less sail area. You say: we don't have as much sail up, therefore the force of the wind will be lower and we won't end up capsizing. But there is a disadvantage when you are racing a sailboat if you make the sail smaller, right? It makes the boat go slower, and slow boats end up losing the races. So the problem with passive robustness is you are paying the cost of the robustness whether you have the variability or not. You've made the boat heavy, you've reduced the sail area, whether there is a strong gust of wind or a small gust of wind. So what we actually do in racing boats is we don't reduce the sail area. I mean, sometimes in very strong winds you will. But what we do more is this: when the boat starts tipping over because of the wind, we move people to the other side of the boat. In English, it's called hiking out. What you do is you counterbalance the torque of the wind at the time you have the gust. So in effect, you're putting in a compensating signal to cancel out the variability. But the beauty of active robustness is you're only doing it at exactly the time you need it. You're not constantly slowing the boat down because you've made the boat have so much inertia that nothing can tip it over. You just actively respond to the variability in the system.
Now, active robustness is really about using feedback loops to stabilize the system, but there is actually a dark side to using feedback loops in a system, because sometimes the feedback loops function so well that you don't realize what is going on in the system. The feedback loop will actually mask the deterioration of the system, and you can get overconfident in the ability of your system to perform. An analogy I'd like to use to explain that is the physiological phenomenon of shock. When you get injured and you lose a lot of blood, how does the body respond, and how does that evolve? When we talk about shock in English, we talk about there being two stages of shock. One is known as compensated shock, and the other is decompensated shock. The distinction is that compensated shock is where your body is able to compensate for the loss of blood that is occurring. When you transition to decompensated shock, that is what is very likely to kill you. So physiologically, the term we use is homeostasis, maintaining the same state in the body. And if your body starts losing blood, one of the most critical things it is trying to do is to maintain blood flow to the brain. Because if you don't get oxygenated blood going to your brain, then you start becoming stupid. And when you start becoming stupid, if you're injured, then you die. So we really try to keep the brain functioning as much as possible. Now, the way this is done: initially, when you have trauma, the blood volume is going down, but the body preserves blood pressure, and that preserves blood flow, and that preserves alertness. This term AVPU is the standard term in emergency medicine in the United States for assessing alertness: Alert, Verbal, Pain, Unresponsive. Somebody is fully alert, or they will respond to words, you can get their attention by using words, or they respond only to pain, or they are unresponsive.
So that's a scale by which you assess alertness in emergency medicine in the U.S. You probably use a similar system in France.
But how do we keep that blood pressure up, even though the blood volume has gone down? What happens is we increase the rate at which the heart is pumping. We raise the heart rate; that would be called tachycardia, the heart runs faster. We increase the quantity of blood we pump on each beat of the heart. We increase the respiration rate to get more oxygen into the blood. And we have a phenomenon called vasoconstriction, which is that the blood vessels in the body tighten in order to reduce the amount of blood distributed to areas that don't need it. So there's a whole set of automatic responses occurring, and the consequence of these automatic responses is that they keep the body functioning during this compensated period. Now, my point is, if your critical performance indicators are things like blood pressure and alertness, you are going to see no deterioration in those critical performance indicators. You've got some bad things happening, which is that blood is disappearing from the body. Your body is compensating for that by changing other things. But as far as you can tell from your critical performance indicators, there is no change taking place in the system.
In decompensated shock, you can no longer provide enough blood to the brain and to the heart. So your heart rate drops, your breathing rate drops, and it becomes a self-reinforcing process. And very frequently, the consequence of decompensated shock is death. You really want to prevent people from reaching the decompensated stage, and that's one of the main objectives of emergency medicine: to prevent this cascade of deterioration that occurs in decompensated shock.
So what happens when you decompensate is you reach a point where your heart rate cannot compensate for the problem, your blood flow ends up dropping, and then your blood pressure and everything else goes bad very quickly.
Now, the reason I'm using this example is because even though it looks like things are staying the same while you're in compensated shock, there are plenty of indicators of what is going on in the body. What is happening to heart rate, what is happening to breathing rate, what is happening to the color of the patient's skin is telling you that you are in compensated shock. And I would argue this is absolutely the same in a product development process. We can create some very good feedback loops to make our development process more robust. And the feedback loops can be so good that all of our key performance indicators are staying the same, but we can be very close to falling off a cliff without knowing it. The example I would give is, let's say a team is delivering products, they're using Scrum, all of their individual sprints are on time, completing everything that was in the sprint. So the key performance indicators are: did the sprint backlog get done? Was it done on time? Was it done without throwing extra money at it? Everything looks good. But let's say the way the team achieved that is that initially, on the first sprints, they were working 40-hour weeks, and by the fifth sprint they are working 90-hour weeks. You have lost all of your margin in that system. Any variability could cause the entire thing to unravel on you, and you have indicators that you're losing the robustness of the system, but those are not the primary performance indicators that you're used to using.
My main point on active robustness: I strongly favor active robustness as a better solution to the problems we deal with in product development than passive robustness. But I would argue that if you are using active robustness, you really need to pay attention to how much margin you have in the system. Because these control loops can be so good at maintaining performance that you're not aware of how much deterioration is actually taking place.
Now, let me give you another view of the world. So the classic view of the world is variability is bad. I could use robustness to end up counteracting the negative effects of variability.
A guy named Nassim Taleb coined a term called antifragility. He wrote a book, Antifragile. You may be more familiar with him for some of the earlier books he wrote. He wrote a book called Fooled by Randomness, and the term black swan was one that he originated in his book The Black Swan. He's a good thinker on the problem of how variability ends up fooling us and how we misinterpret things happening in a stochastic world. And he would say that our traditional view of the world is that,
you know, with fragility, when I perturb a system, when I put a negative force on the system, that causes the outcomes to go down. And our traditional view of the world is that the opposite of fragility is robustness. And Taleb's view is: no, that's actually not the opposite of fragility. The opposite of fragility is antifragility, which is systems that get better outcomes in the presence of variability. Instead of getting worse outcomes in the presence of variability, they get better outcomes. And that's a really interesting idea, because in a world where you can't make variability go away, you would actually end up getting better outcomes if variability increased than if variability decreased. So that's the high-level concept, and the interesting question is: are there systems out there where increasing variability ends up increasing economic outcomes? The answer in manufacturing would be: that's impossible, you must live on a different planet if you can even say something like that. But if you went to the world of finance, for example, you would see the attitude is completely different than the attitude in manufacturing.
I'll show you this using some thinking from the world of financial options and option pricing theory. Back in the 90s, Robert Merton and Myron Scholes won the Nobel Prize in Economics for the Black-Scholes option pricing model. It was the first time people had rigorously quantified the value of a stock option. And I'm going to show you the basic structure of how you quantify the value of a stock option. It's done as an economic expectation, which is nothing terribly new. An expectation is a probability function times the function you're trying to get the expectation of. Now, an option gives you the right to buy stock at a certain price in the future, and in English that is known as the strike price of the option; I'm sure you have a French term for it as well. So the idea is this. Let's say I have an option to buy Apple stock for $120 a share. If at the time the option expires, the stock price is at $80 a share, I'm not going to exercise that option, right? Because I can buy the stock on the market for $80. Why would I pay somebody $120 for that stock? So no matter how far below the strike price the market price is, I only lose the money that I spent on the option. That is the only downside I have: the cost of the option. On the other hand, if the stock price is higher than the strike price, for every dollar it's above the strike price, I put another dollar in my pocket. So if it's $10 above the strike price, I'm going to make $10. If it's $20, I'm going to make $20. And this is what we call an asymmetric payoff. It's really important to be able to recognize when there are payoff asymmetries in the problem. So you multiply the payoff function by the future price of the stock, which is actually a probability distribution.
Traditionally, we've used a lognormal distribution as the best approximation, although the tails actually tend to be fatter than a lognormal distribution's. But it's a distribution with a central tendency. It's not a uniform distribution from zero to infinity or anything like that; it looks a lot like a Gaussian distribution. I multiply this distribution by this payoff function, and I get the lower chart, which is the expected payoff. And what you'll see is that comparing the area on the right side, which is the gains I'm expecting to make, versus the area on the left side, which is the losses below the x-axis, the right-side area is higher, and that's why stock options have value. Now, the point I would make is this: the reason the right side is higher than the left is not because there is an asymmetry in the probability function. The reason is because there's an asymmetry in the payoff function. There may be slight asymmetries in the probability function because it's lognormal and things, but the real key is what's happening to the payoff function, that asymmetry. If you then asked: what would happen to the value of a stock option if you increased the variability of the stock price? The term we use in finance is volatility, but it means the same thing as variability. So if I increase the variability of the stock price, will an option be worth more money or less money?
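A small Monte Carlo sketch of this question (the $120 strike is from the example above; the $100 starting price, the two volatilities, and the simplified drift-free lognormal model are my assumptions, not the full Black-Scholes formula):

```python
import math
import random

random.seed(1)

def call_value(strike, s0, sigma, n=200_000):
    """Expected payoff E[max(S - strike, 0)] of a call option, where the
    terminal price S is lognormal around s0 with volatility sigma
    (interest and drift ignored for simplicity, so E[S] = s0)."""
    total = 0.0
    for _ in range(n):
        s = s0 * math.exp(random.gauss(-0.5 * sigma ** 2, sigma))
        total += max(s - strike, 0.0)
    return total / n

low_vol = call_value(strike=120, s0=100, sigma=0.2)
high_vol = call_value(strike=120, s0=100, sigma=0.4)
print(low_vol, high_vol)  # the higher-volatility option is worth more
```

Doubling the volatility raises the option's value: the longer left tail costs nothing extra because the payoff is already floored at zero, while the longer right tail pays.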
Anybody think more money? Raise your hand. Yeah, we have enough bankers in the room that you would know the answer to that question. Yeah, absolutely, it's worth more money. And you can reason it through pretty easily. I extend the tails of the distribution on the left and the right. If I extend it on the left, it produces no impact. If I extend it on the right, it goes further into the high-payoff region, and therefore I end up increasing gains. And here what I've done on this graph is the same value calculation using two different levels of variability, two different standard deviations. The blue line is a low standard deviation; the red line is a high standard deviation. What you'll see is that on the left side of the graph, the area is actually the same, it's just distributed differently. On the right side of the graph, the area is higher for the high-variability situation. So if you had somebody from lean manufacturing sit down for a cup of coffee with somebody who knew economics, and the lean manufacturing person said, you know, one of the great lessons we've learned over the last 50 years in manufacturing, where the smartest people in the world are, is that minimizing variability always increases economic outcomes, the reaction of the economist would be: you understand nothing about economics. Amazing that in 50 years you have learned nothing about economics. Because in fact, that case is true in manufacturing, but it is very far from being a generalization to the rest of the world. And in fact, I would argue that payoff asymmetries are extraordinarily common in product development. Say you're a pharmaceutical company developing a new drug.
If you have a successful drug, you might have a billion-dollar blockbuster drug. If you have a candidate molecule that doesn't work, and you screen it early in the development process, you may end up spending $10,000 screening that molecule. What kind of molecules do pharmaceutical companies bet on? Do they bet on molecules that have been around for the last 50 years, that have been studied by every biochemistry department in the world? No; those have such low variability, there's no chance the tail of performance will go into the high-payoff region. They bet on new molecules, things that have just been discovered, because those are the only ones that can turn into a blockbuster. But they don't invest a billion dollars in each of their bets. They terminate most of the bets. They may start 200 leads, but they very rapidly screen down to a smaller and smaller group of leads. Now, where did the manufacturing people get this bizarre idea that variability was always bad? It actually came from the payoff function that we have in manufacturing, something called the Taguchi loss function. The idea in manufacturing is that any time you deviate from target in a manufacturing process, either on the upside or on the downside, you will create losses. Either you've got too much margin in the design and you're giving product away for free, or you have too little margin and you're experiencing failure costs. So if I told you the payoff function looked like this, and I asked you what level of variability you would like to have in a process with this payoff function, you would say: I want to minimize variability. I want to operate at the top of that parabola. And that is absolutely correct in manufacturing. The mistake is to assume that that is the payoff function that exists in product development.
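The Taguchi loss function itself is just a parabola, L(x) = k(x − target)², and under that payoff function more variability always means more expected loss. A minimal sketch (the target, k, and sigma values here are arbitrary illustrations, not from the talk):

```python
import random

random.seed(2)

def expected_taguchi_loss(sigma, target=10.0, k=1.0, n=100_000):
    """Average Taguchi loss k * (x - target)**2 for a process centred
    on target with standard deviation sigma."""
    return sum(k * (random.gauss(target, sigma) - target) ** 2
               for _ in range(n)) / n

# For a centred process the expected loss is k * sigma**2, so cutting
# variability always pays off -- under THIS payoff function.
print(expected_taguchi_loss(0.5))  # ~0.25
print(expected_taguchi_loss(2.0))  # ~4.0
```

This is exactly why "minimize variability" is the right rule in manufacturing: the payoff function there is symmetric and peaks at zero deviation, which is the opposite of the option-like payoff above.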
Now, that mistake has led us down the wrong path, because when you look at an economic expectation, it's a probability function times a payoff function. Helped by the guidance of manufacturing, we have focused excessive attention on the probability function and not enough attention on the payoff function. So we are embracing techniques like SPC, statistical process control, so we can reduce the variability in the distribution.
We're adopting ideas like Six Sigma because it reduces variability, and reducing variability is supposed to be good. But the interesting thing is that it's actually the payoff function that has the highest payoff for you as a product developer. And that's what I really want to talk about: how do we manage payoff functions in a way that makes variability desirable for us? Now, the traditional approach of choosing low-variability things was one of our strategies. The second strategy was using buffers, and I want to depict that on the same type of diagram. When you look at what we're doing economically by choosing low-variability choices, we're shifting from this red curve to the blue curve, and the problem is that we're losing that high-payoff tail when we do it. We're also losing that high-payoff tail when we undercommit, because we're preventing the good outcomes. We commit at a level that is very conservative, which means we do not harvest the value of the good outcomes. So the traditional approaches aren't particularly maximizing the economics for us.
What actually does improve the economics for us in product development? It's the use of fast feedback loops in development processes. The same feedback loops we were using in active robustness, if we focus on the speed of the feedback loop, can actually create payoff asymmetries in product development. And I want to illustrate how we do that with a little toy problem. I'm going to offer you a game where you are going to win $3,000 if you pick the correct three-digit number. The first way I would offer to play the game is I'd say: I'll charge you $3 to pick the three-digit number. And you'd probably be able to say, one chance out of a thousand of winning $3,000 is worth $3. It costs me $3, I win $3 on average. That's not a terribly smart game to play, I think. But what if I changed the game and said: instead of paying $3 for a three-digit number, I'll still charge you $3 for three digits, but I'm going to sell you the first digit for a dollar, and then I'm going to give you feedback as to whether you picked the correct first digit. And then you can choose whether you want to buy the second digit. And I'll give you feedback on the second digit, and then you can choose whether you want to buy the third digit. What's the difference in the economics of the second game? It's obviously a big difference. Here what I'm doing is I'm graphing your investment versus the probability. There is a 100% chance you're going to buy the first digit. There is a 10% chance you will buy the second digit. And there is a 1% chance that you're going to buy the third digit. So your average investment is actually going to be $1.11. You're going to get a net gain, an average gain per game, in doing it the second way, of $1.89. A huge shift in the payoff. But the important question is, where did that shift in the payoff come from?
What has changed about this game? You still have a $3,000 prize. You still have a one chance in a thousand of winning the prize. You are still paying $3 to buy three digits. So some of the most important economic parameters of the game have not changed. What has changed is I've given you the ability to shut down the game after buying the first digit. And I've given you the ability to shut down the game after buying the second digit.
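A quick simulation of the incremental version of the game (a sketch of the toy problem as described; I'm assuming digits are uniform 0-9 and that you stop as soon as the feedback says a digit is wrong):

```python
import random

random.seed(3)
PRIZE = 3000

def play_incremental():
    """One game: buy digits one at a time at $1 each, stop on a miss.
    Returns (amount spent, amount won)."""
    spent = 0
    for _ in range(3):
        spent += 1
        if random.randrange(10) != 0:   # 9-in-10 chance this digit is wrong
            return spent, 0             # feedback says stop buying
    return spent, PRIZE                 # all three digits correct

n = 200_000
games = [play_incremental() for _ in range(n)]
avg_cost = sum(spent for spent, _ in games) / n
avg_net = sum(won - spent for spent, won in games) / n
print(avg_cost)  # analytically 1 + 0.1 + 0.01 = 1.11
print(avg_net)   # analytically 3000/1000 - 1.11 = 1.89
```

The prize, the odds, and the $3 total price are all unchanged; only the option to stop early moves the expected outcome from $0 to about $1.89 per game.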
The way finance people would describe it, if we had somebody like Chris Matts or Olav Maassen here, they would say: oh, this second game has an embedded option. It is the value of that embedded option that is changing things. You've given me two options, to shut down after the first digit and to shut down after the second digit, and that's what's adding the economic value. And that would be precisely correct. In fact, to the extent that you would ask how much I would pay to buy the second digit, in the second game I would be willing to pay up to $1.90 for a chance to buy the second digit, because at the third digit I know I'm going to have a positive outcome; I'm going to have a 99-cent saving on the third digit on average. So the interesting thing is, it's an embedded option. You can also look at it the way I was describing it a little earlier: I'm giving myself a chance to shut down a bad option early. The entire class of choices that have an incorrect first digit are bad options. You want to shut down that class of paths with as little investment as possible in order to get the payoff. And the interesting thing to look at here is: what in this second game is actually creating those options? What's creating the options is that I'm buying the information in small batches. Instead of buying the information in one batch of three digits, I'm buying it one digit at a time. And you should recognize that it is batch size reduction that is the mechanism that gives us the ability to accelerate feedback loops, and it is the feedback loops that we exploit in order to generate the payoff asymmetries.
And I often explain it to people in product development this way. I say, your view of the world is that product development is like a horse race: you try to pick the best horse to bet on, you place your bets at the beginning of the race, and then you watch the horses run down the track. My view is that product development is like a horse race where you place your bets at the beginning of the race, but you can change the amount of money you have on each horse after the horses have started running. And it is an enormous advantage to be able to shift your bets after the horses have started running. Because what you do is shut down unproductive paths early and amplify your responses on the productive paths. In effect, if you look again at that payoff function curve, what we're trying to do in product development processes is shift the left side of the curve upwards by shutting down unproductive paths very quickly, and raise the right side by amplifying our response when we start seeing things turn out better than we thought they would. And this is how we create the payoff asymmetries: truncating the bad outcomes, amplifying the good outcomes.
Even with no prior knowledge of what the probabilities of those outcomes are, when they come, we take advantage of them. We react to the changing facts and exploit the changing facts.
Payoff asymmetries are not, in my view, an accident of the universe, something that some things just happen to have. You can explicitly design your process to create payoff asymmetries.
you have a huge advantage over any other country in the world, because you know how to use Système D, right? This is just my interpretation from what I see of Système D, but Système D does not try to sit at the beginning, anticipate exactly what is going to happen, and conform to the original plan you came up with. It's a recognition that the facts on the ground are going to change, that at each moment in time there may be an optimum way to deal with them, and that you may take an indirect path to your goal; but if you keep your goal in mind, there are many ways to get to the goal. Now, during the break, you can educate me on what Système D actually is. But this notion that I will be confronted with changing facts, and that I can construct a system that takes advantage of those changing facts, is a really useful way to approach product development. Now, I would argue that you see none of this type of thinking in lean manufacturing. You see no options thinking in lean manufacturing. But the good news is that you're seeing it very prominently in the way we're using lean in product development. Something like the Lean Startup approach of saying, I construct a minimum viable product, I treat it as a hypothesis, I either persevere or pivot: it maps precisely onto this sort of economic logic. I don't know what the outcome is going to be, but I can change my direction based on the new facts. Okay. Now, let me just summarize a couple of takeaways from this, what I think were the main points I was trying to make.
You shouldn't necessarily fear variability. I'm not saying celebrate variability; I think it would be as wrong to think celebrate variability is the correct answer as to fear variability. It's variability in the presence of asymmetric payoffs that actually makes us money. So increasing variability in the presence of asymmetric payoffs is good, and increasing the payoff asymmetry is really good. Buffer with care. I think buffers are used very sloppily. You do a cost of delay analysis, you figure out what time on the critical path is worth, and then you'll see somebody say, I'll just insert an extra 12 months in the schedule to make sure I'm not embarrassed by the delivery date. They may have a $300,000 a month cost of delay; they're throwing away $3.6 million worth of profit when they put that buffer there. You want to do that with some quantitative scrutiny. Monitor your safety margins, not just the key performance indicators of a process. Focus on payoff functions instead of probabilities. Six Sigma takes a statistical view of the world; you need to incorporate economic thinking in the way you look at the problem. Accelerate your feedback loops. Buy information in small batches. The small batch purchases of information are what create the ability to create payoff asymmetries.
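The buffer arithmetic above is a single multiplication, but it is worth making explicit, because the cost is rarely written down when the buffer is inserted:

```python
# Cost of a schedule buffer, using the figures from the talk:
# a 12-month buffer at a $300,000-per-month cost of delay.
cost_of_delay_per_month = 300_000  # dollars of profit lost per month of delay
buffer_months = 12

buffer_cost = cost_of_delay_per_month * buffer_months
print(f"profit given up by the buffer: ${buffer_cost:,}")  # $3,600,000
```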
I always tend to think that if you behave the way a smart gambler behaves, you'll be making the correct choices. Create options to bypass obstacles and exploit opportunities. Shut down the unproductive paths early. And value good economic choices over conformance. And I would just sort of summarize this.
The other night I was saying, is there a difference in the worldview of people who think variability is bad and people who think variability is exploitable? And I would say that on many dimensions, if you have the different mindset, you approach issues in a different way. The traditional view is that conformance to plan is always good. In the worldview where variability is exploitable, I only want to conform to the plan when the cost of conforming to the plan is lower than the value of conforming to the plan. The traditional view is heavyweight planning at long time horizons. The exploitable-variability view is detailed planning at short time horizons, modifying my plans as new facts come in. The traditional view is that the planning has to take place up front; the new view is that we should do it just in time. The traditional view is that we should avoid all risks; the new view is that we want to take rational risks: if the risk makes economic sense, we should take it. The traditional view is that we want to under-commit so we end up looking good; the new view is that I want to dynamically change my commitments: as the facts change, I might alter what my commitments are. And the traditional view, obviously, would look at these tools coming out of manufacturing, like SPC and Six Sigma, and say, wow, these are miracle cures for the problems I have in product development, when in fact some of these ideas actually do more damage than good, because they're focused on things that are not causal to success. So that's just a little summary. I won't go through all of it. I want to leave a minute or two for questions.
Any questions? Est-ce qu'il y a des questions? [Are there any questions?]
Okay, well, you don't have to ask questions, so I'll just declare victory. You're probably aware of the books. I'm going to provide a PDF copy of the presentation; it will be available somewhere, somehow, so you don't have to worry about whether you copied all of the slides while I was talking. And I think this was videotaped by InfoQ or something. Okay, so, all right, merci beaucoup [thank you very much].