Vasco Duarte
Transcript
Hello everyone.
First of all, I apologize: I won't be able to give the presentation in French.
But we can talk after the presentation; after the presentation we can talk in French.
So good morning everybody. It's a pleasure to be here to talk about a topic that has generated a lot of discussion. And that was actually to my surprise, because when I started talking about no estimates, I thought it was darn obvious. I thought it was so clear that there wouldn't be any question about it. But actually, how it all started was in 2005. I was talking to my boss about no estimates. I was presenting: hey, I found this great way to know where we are and when we are going to deliver. And it's darn accurate. It's really going to help us make decisions about what to include, what not to include. And I explained no estimates. I'll explain a bit of that today as well.
He looked at me very seriously and he said, sit down. And so I sat down and then he walked to the door, closed the door. And then he looked at me and he said, Vasco, never tell this to anyone. People will think you're crazy.
And this was 2005. And it took me all the way up to 2012 to have the guts to talk about it publicly.
And I'm supposed to talk closer to this. By the way, can you hear me in the back if I talk like this? No. Okay, good. Thank you. So, first of all, I would like to give you a gift. If you go to that URL over there, you can download the book for free. It's $27, so that's enough for you to have a dinner, right? I just paid for your dinner today. No.
So, but this will only be available today. So if you want to download it, you have to go there and sign up today. Otherwise, you won't be able to get it. So if you're watching this in the future, way beyond 2015, then sorry, it's no longer available.
But you will see the URL in other slides as well. So today I'm going to talk about 10 principles that I think are important for software development that can be derived from what I call no estimates. Now, many of you will be saying, so isn't no estimates just like one thing? And no, it isn't. And the goal is not that it should be. That's not what no estimates is about. No estimates is a space for us to discuss what works in terms of helping us predict when we are going to deliver things that matter to the business. That's also one of the things I talk about in the book, which is that we should be talking about the things that matter to the business, not how much each thing costs, but rather which things should we be focusing on.
By the way, how many of you love the estimation process? Could you get your hands up?
How many of you love to have those long, winded discussions about whether a story is a three or a five?
All right, so this is not for you.
But no estimates is for you if this is how you feel when you enter an estimation meeting. You go like, what?
Again? Couldn't we just, you know, get on with the work? Right? Also, no estimates is for you if estimates feel a little bit like this job.
Right? Estimates are kind of booby-trapped. You enter in a room and you know that something will explode and you hope it's not you.
So if you feel that way, no estimates is also for you. But for me, the most important aspect of no estimates is that no estimates is for you if you would like to make people happy. Isn't that like simple? The reason we develop software is to help somebody get a job done, whatever that job is, right? Maybe we're trying to help them, you know, do simple things like launching a space rocket or complicated things like calculating the tax amount for the yearly tax return. Right? That's what we like to do. That's why many of us enter the software business. Because we wanted to help other people, sometimes ourselves, right? Sometimes we are the person we want to help. So here's the thing. I don't know how trains work in France. I don't know if you guys have... Trains that are always on time.
Almost everywhere I go people laugh when I ask that question. I don't know what that is, but I thought the TGV was always on time. I thought it was only Deutsche Bahn that was late. But then I started traveling around the world and it seems that trains being late is a very common occurrence. In Finland, where I come from, we have a very good train company. Basically, they only face four problems. They only actually have four problems in running trains: winter, spring, summer and fall. Those are the only four problems they have, right? In summer, the trains overheat. In winter, the ground is frozen, so the trains can't go as fast as they could. In the fall, there are just too many leaves on the track, so the trains take a long time to stop, so they have to go slower. And in spring, I don't know, maybe they're all on vacation.
But here's the thing, this train here, by the way, who knows what this train is? Shinkansen, exactly. This train here, let me see the numbers because these numbers are interesting.
This train here was in 2014, the whole year, the whole year, right? This train was late 54 seconds on average. And that includes natural catastrophes.
Right? So they figured out how to solve this problem. In the 90s, let me see what year was that? 97. This train was late 18 seconds, less than 20 seconds on average. The whole year. How did they do it?
How did they build a train that is always on time?
How was that possible? So before we go there, think about this. When you go to the train station and you're going to get a train and you have an important meeting, say, in Paris, and you're, I don't know, Lille or Lyon or Marseille, and you want to get the train to go to Paris for this very important meeting. You don't want to miss the meeting, right? So you're going to look at the history of how well the trains are on time and you're going to think, hmm, should I go one day before just in case, you know, if the train breaks down I can always walk out and rent a car? Should I go five hours ahead of schedule so that if the train is late, I still, I will have time to reach my very important meeting without missing it? You have to ask these questions. Again, where I come from in Finland, I always took the train earlier than the one I thought was enough to get there on time. And the reason for that was I knew that it was likely that my train would be late or even not show up at all.
So I was nervous every time I entered the train station. I had to make decisions that had nothing to do with my life; they had to do with the freaking train company. It was not about my business, it was about the train business. I was nervous every time I was taking the train.
On the other hand, if you lived in Japan and you were going for a very important meeting in Tokyo and you had one of these, the only decision you would need to make is do I go in the train I need to take or do I go half an hour earlier so that I have time to visit that manga shop I've been meaning to visit for the last few years. And you know what? You would be relaxed. Because you knew the train would be on time.
You would be without worries about perhaps being left in the middle of the track, in the middle of nowhere, with the train stopped because of too many leaves on the tracks. So this is what they did in Japan to make a train that is always on time. The Shinkansen, whose name literally means "new trunk line", right? That's where Shinkansen comes from. They built a completely new train line so that this train would not need to mix with the slower trains. They built a new system that was designed to get trains on time to the stations where they were going to be. They didn't try to improve a broken system; they built a new system. And this is actually the first principle of no estimates. If you don't trust the system you work in or the processes you work with, change the processes. Don't try to get them slowly better. Ask: what is the fundamental cause for the problems we're facing? And if being on time is very important for you, for example, you might be working on tax handling software.
Being on time is extremely important. Then you build a system that delivers on time. You don't try to estimate better. That's a broken system. It's not going to get better, no matter how much we try. So that is principle number one.
This is a very interesting story. This is a project, a very important project if I recall correctly, and it was basically divided into two phases, right? The first one, which I call development phase, and the second one, which some people call quality assurance, I call it desperately testing and fixing phase.
And look at how long the phases are.
This is a real project. We have the development phase, which is a bit less than one third of the whole project duration. But look at this blue line. The blue line is the number of open defects, right? It climbs up and up and up and up and up. And if you're here at this point, right here, at the top of that curve, the question is: when will that project be delivered?
Anyone care to guess?
Never. Yeah, that would be a fair estimate, wouldn't it? But then look what happens afterwards. Boom! The blue line drops. What happened there? Well, we're not magicians, right? So what we did is that we went to Jira and we started removing stuff from the project. Won't fix, won't do, won't fix, won't do. And, you know, the line came all the way down. And then, of course, we got a lot of bugs closed at that time, and then the quality started improving. Or did it?
It didn't really. The more we tested, the more problems we found. And guess what? That's what testing is there for, right? So that you would find more problems is not really a surprise. But wait, this project actually ended. They released the project. How did they do it?
Customers keep testing.
Yeah, so delegate testing to the customers. Yes, that would be one way.
So actually, the reason why this project was delivered was very simple. When I was working in this company, we had many projects ongoing. And when this happened, I was looking at the history of other projects. And here's what I found. Large projects had an average three-month desperately-testing-and-fixing phase. Small projects had an average 90-day desperately-testing-and-fixing phase. Medium-sized projects: also three months of desperately testing and fixing. And I don't know about you guys, but I don't believe in coincidences. So when we actually analyzed what was going on, we found out that the reason why those projects were released was because, you know, management would just be fed up. They would come down from the corner office, they would go to the team room and say: guys, either we release now or we cancel the project.
So that's how we meet schedules in the real world. It's about somebody losing their patience or losing their money and saying: no, stop it, let's go live. So what was that about being accurate about estimates again? This is just one example. Of course, you know, you have your own examples in your own companies and perhaps you do it a lot better than this company did. But here's the thing. When you have projects with desperately-testing-and-fixing phases, the delivery schedule does not depend on any estimates. The delivery schedule depends on, sometimes gut-feel, business decisions. It has nothing to do with the estimates.
So maybe we were on time with this project. Who cares? If we were on time, it was a coincidence. It was not by design. It was not that we were very good at estimates. It was because somebody said, you know, enough. Let's get the product out the door.
So this brings us to principle two, which is one of the biggest problems here was this desperately testing and fixing phase because it builds uncertainty. So principle two is shorten the feedback cycle. And this is applied at all levels. It's applied at the business level, it's applied at the project level, it's applied at the sprint level, it's applied at the user story level, it's applied at the daily level. And the reason is simple. Feedback will inform the decisions you need to make. The longer you go without feedback, the worse your decisions will be. And guess what? If you shorten the feedback enough, you don't actually need to estimate anymore. You only need to ask, are we ready to deliver today? That's the only question you need to ask. And that's actually the question I ask when I work with teams that are delivering products is, can we deliver today? And guess what? That question, which is of course meant to cause reflection, to cause people to think, why can't we deliver today? What's preventing us from delivering today? That gives us the certainty we need to meet the schedules that are imposed on us, whether by management or the market. It's about shortening the feedback cycle. You can be ready to release every day. You will always be on time, no matter what the time is.
So you can start to build your own Shinkansen, your reliable system. Then another question that I get asked all the time is, can we be accurate with our estimates? Well, some people will say that yes, we can, and the only reason you're not accurate is because you're irresponsible and playing with other people's monies. Yeah, I get told that all the time on Twitter. It's fine. I've gotten used to it. After all, it's been three years.
Here's one excellent video, the video by J.B. Rainsberger.
It's on Vimeo, it's from Øredev 2013. It's a very short video that I would definitely recommend. And he makes one very important point: he talks about accidental complication. Here's the point. He says that the cost or duration of any piece of work that you need to do is a product of the essential complication, i.e. is it a simple thing like launching a rocket, or is it a very complex thing like doing your taxes? That's the essential complication, the natural complication of the problem you are trying to solve. Multiplied by the accidental complication. And accidental complication is what JB calls us not being very good at our jobs. Right? So here's one example. When you started developing that product five years ago, was it easier or harder to work with the code base than it is today? In most cases, it will be harder. In some cases it will be easier; if that's the case with you, congratulations. But the point is that it's not the same, right? Now think about this. In Scrum we're asked to do story point estimates, which is relative estimation, right? We're comparing the essential complication of a particular story with another story which we believe has the same essential complication.
But guess what?
The accidental complication, i.e. us not being very good at the job, is sometimes so large that it dominates, it dominates this equation. Just a very simple example: when we started developing a website, we were perhaps, you know, using just the basic framework. Things were clean, we knew what we were doing. A few years later, we're still delivering that same website, and unfortunately this is the case in one of the companies that I'm working with. You might even be rewriting the software for the third time, because accidental complication has gotten so high that you think it's better to rewrite everything from scratch than to try to change the old software. Anybody developing PHP websites over here? You probably know what I mean, right?
So here's the thing. If in one company, this is just one example, but I'm sure you can relate. If in one company, at some point, you decide to rewrite the software because accidental complication was so large you didn't want to pay for it,
How about that relative estimation technique? How does that work again? Right? You're trying to relate the size and the cost and duration of a story now, when you're about to rewrite the whole system, with the size or the cost and duration of a story, you know, a few months ago when you were, you know, knee-deep in mud.
How does relative estimation work again? Comparing similar things. But if accidental complication, which changes over time, dominates this equation, then the only logical consequence is that relative estimates have zero value in that context, where accidental complication is changing and becoming the dominant part of the cost and the time it takes to deliver something.
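To make that concrete, here is a toy sketch of the multiplicative model behind Rainsberger's point. All the numbers are invented for illustration: two stories with identical essential complication, i.e. the same "relative size" in a planning session, can end up with wildly different real costs once accidental complication grows.

```python
# Toy model of J.B. Rainsberger's point: cost = essential * accidental.
# Every number below is invented purely for illustration.

def cost(essential: float, accidental: float) -> float:
    """Cost of a story: essential complication times accidental complication."""
    return essential * accidental

# Two stories judged "the same size" in a relative-estimation session.
essential_a = essential_b = 3.0  # e.g. both called "3 points"

# Story A was built early, in a clean codebase; story B years later,
# after accidental complication (technical debt) has grown.
accidental_early = 1.2
accidental_late = 8.0

cost_a = cost(essential_a, accidental_early)  # early, clean codebase
cost_b = cost(essential_b, accidental_late)   # late, debt-ridden codebase

ratio = cost_b / cost_a
print(f"Story B costs {ratio:.1f}x story A despite identical story points")
```

The relative comparison between the two stories is still "correct", and it still tells you nothing useful about the actual cost, because the dominant factor is not in the estimate at all.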
So this is a problem for us. But this is not the only problem. See, the thing is that some people take queues very, very seriously.
So how many of you have 100 or more items in your backlog? If you could raise your hand. See, that's what I mean, right? That item number 100, which was promised to be delivered in three weeks a year ago, how's the estimation for that coming?
But there are other problems, right? Because that's just one particular side effect of queues. The other consequence of queues is, you know, two weeks ago you started working on something and you thought it was going to take two days. But that was two weeks ago, and the item has been sitting in somebody else's queue. For two weeks.
How can that estimation be accurate?
Queues can have such a large impact on your organization as to completely destroy any value that you might think estimating the cost and duration of an item has. I don't know about you guys, but I've removed items from queues, also known as JIRA, that were five years or older.
Right? So what's the point, again?
So the point with queues is this: queues will always have a very large impact on how work flows through your organization. And it will be such a large impact that trying to estimate one single item of work has no meaning until you actually start executing. And when you start executing, queues have such a large impact on your organization that it's actually much more useful to look at the cycle time or lead time, depending on what you're trying to measure, of items in the past than to try to say how long this item will take.
If queues are so important in how long work spends in the system, trying to estimate the size of an item is a very small fraction of deciding when that item will be ready.
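As a minimal sketch of what looking at lead time and cycle time can mean in practice (the dates below are invented), record when each item was requested, started and finished, and compute the elapsed times. The gap between the two measures is exactly the queue time being discussed here.

```python
from datetime import date

# Hypothetical work items: (requested, started, finished) dates.
items = [
    (date(2015, 1, 5),  date(2015, 2, 10), date(2015, 2, 14)),
    (date(2015, 1, 7),  date(2015, 3, 2),  date(2015, 3, 9)),
    (date(2015, 1, 20), date(2015, 3, 4),  date(2015, 3, 6)),
]

# Lead time: request -> done, what the customer experiences.
lead_times = [(done - requested).days for requested, _, done in items]
# Cycle time: start -> done, the part the team actually works on.
cycle_times = [(done - started).days for _, started, done in items]

print("lead times (days): ", lead_times)    # [40, 61, 45]
print("cycle times (days):", cycle_times)   # [4, 7, 2]
```

In this made-up data, the first item was a "few-day task" once started (4 days of cycle time), but it spent 36 days sitting in a queue first. Estimating the 4 days, however accurately, tells you almost nothing about the 40.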
But yet some people argue that we can be good at estimation. So let's just review some data on that.
This is the CHAOS Report. 80% of projects in 2004 were late or failed. Again, the CHAOS Report. This was 95, but this was the only breakdown that I could find. Look at the 51-to-100% bucket and the 100%-or-larger time overruns: that's 68% of projects that are 51% or more late.
Chaos Report 2009, average time overrun 63%.
Chaos Report 2011, average time overrun 63%. But okay, just in case you don't like the Chaos Report, because of course people will say, well, Chaos Report is not real science and so on. Okay, fine, let's look at other surveys. Gartner, project failure in 2012. By the way, failure here means total failure, not just being late. It means like delivering nothing. Small IT projects 20%, large IT projects 28%.
Failed. Not even just late. There are late projects here too, on this big bar of 72%.
Capers Jones, a very well-regarded person in the defense business. This was written in the Journal of Defense Software Engineering. Of large systems that are completed, about 66% experience schedule delays and cost overruns.
This was Scott Ambler's project success survey. Traditional projects: 53% failed or challenged. But we're agile, right? We're much better than that. 40% of agile projects were failed or challenged.
And this is the one that really got me. This is from McKinsey, a survey done or study of large-scale IT projects done with the University of Oxford in 2012. 17% of large IT projects go so badly, so badly, they threaten the very existence of the company.
Think about that for a second.
Large IT projects go so badly, they threaten the existence of the company.
So I'll publish more. I'm going to write an article at noestimatesbook.com about that, because there are a lot of people saying: yeah, this crap about us not being good at estimates, that's just crazy. Of course we're good at estimates. You know, you're just doing it wrong. Well, actually there's no data to support that claim. There's no data to support the claim that we can be good at estimates.
So, when you look at this stat, 17% of large IT projects threaten the very existence of a company, ask yourselves: would I be ready to bet my company on the use of a technique that has such a lousy track record? Because that's really the question we're asking. I mean, when you ask a software developer, can you be good at estimates, he's thinking: can I write the code for this story in the time I thought it was going to take? He's not thinking: can I write this story so that I don't make the company go bankrupt? Right?
But the managers who are asking the software development teams to do their estimates this way should be thinking: am I really asking a developer to make business decisions that might make my business go bankrupt? Isn't that my responsibility?
So here's the principle. Believe the data, not the estimates. Because here's the thing. Even if you could be good at estimates, it makes sense to look at what a good estimate is.
Conte, Dunsmore and Shen proposed that a good estimation approach is one that provides estimates that are within 25% of the actuals 75% of the time. Let me break that down for you in ways that even I understand. I go to the bank and there's a person on the other side of the counter who says: hey, I have a great investment for you. You know, you'll give me 100,000 euros and I promise, I promise that I will give you 75,000 euros back. I will only lose 25,000 euros of your money, and that's with a 75% chance. It might go a lot worse, but it's not so likely. Would you ever make an investment like that? Would you believe a banker who told you that? No, you would go: okay, you're fired, right? So why are we using a technique that has, even as a good estimate, such a bad result? It's a lousy investment, guys. Why are we still using this?
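That "within 25% of the actuals, 75% of the time" criterion is commonly written as PRED(25) >= 0.75, and it is easy to check against your own history. Here is a minimal sketch with invented estimate/actual pairs:

```python
# Checking the Conte/Dunsmore/Shen criterion, PRED(25) >= 75%, against
# a made-up history of (estimated, actual) durations in days.
history = [
    (10, 12), (5, 4), (8, 20), (3, 3), (15, 40),
    (6, 7), (20, 19), (4, 9),
]

def within_25_percent(estimated: float, actual: float) -> bool:
    # Magnitude of relative error, measured against the actual outcome.
    return abs(actual - estimated) / actual <= 0.25

hits = sum(within_25_percent(e, a) for e, a in history)
pred_25 = hits / len(history)
print(f"PRED(25) = {pred_25:.0%}  (a 'good' estimator needs >= 75%)")
```

If even the definition of a good estimate allows being 25% off a quarter of the time, it is worth asking what decisions such numbers can safely drive.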
So principle four: use alternatives to estimate-driven decision-making. Of course you can use estimates to make decisions. I mean, nobody can prevent you from doing that, and you might even be doing it subconsciously. But try to find alternatives. What else could we use besides estimates to make decisions? Here's another thing. This list of projects, 17 projects from 2001 to 2003: the average delay was 62%. But, okay, some projects were fairly on time. You know, number one was even ahead of schedule. Number two was on schedule. Number three was very close to being on schedule. But look at project 17: 200 to 245% late. By the way, the bluish bar is when the requirements, sorry, when the project was approved, and the Bordeaux bar is when all of the requirements were known, or we thought they were known, obviously. By being over 200% late, we know they weren't known. But here's the thing: if you ask me what project 1 was, I have no idea. But if you ask what project 17 was, project 17 changed the company. It created a new business model, it helped the company enter a new market, it created a recurring-revenue business model, which, if you're running a business, is actually quite good, because your revenue grows slowly but continuously, because it's cumulative. It was basically a subscription-based service. When I left the company, number 17 was generating more than 50% of the revenues for that company.
So here's the kicker. Even if you're good at estimates, they have no meaning business-wise. Because you can be terrible at estimates and still be hugely successful. So not only can we not be very good at estimates, at least that's what the data tells us, I'm also telling you that being good at estimates is missing the entire point.
It makes no sense. It helps you to do nothing except perhaps being within 25% of actuals 75% of the time, if you believe the definition of a good estimate. Because this company did not need to be good at estimates to completely change their business model. In fact, the worst offending project in estimate terms was the most successful project.
In another company, we were working with a team that was delivering basically e-commerce systems, right? You know, you go to the web, you buy something. And the team was overwhelmed with work to do. They had so much work to do, they were totally lost. They were, you know, demotivated. They had, and I kid you not, hundreds of items in their backlogs that were like small improvements, you know, change the color, move the button, that kind of thing. That is unfortunately quite common in many e-commerce teams. But the product owner decided to spend one sprint, which in that particular company was two weeks, doing what he called an innovation sprint. So obviously, as I was working with them, we decided to focus on something that had business impact. We asked: okay, so what could we do that would have real business impact? And we came up with an idea. The idea was very simple. It was sharing whatever you bought, or what you were about to buy, on Facebook, right? Because the idea was, well, we'll get more traffic. More people will come to the site. And, you know, we will increase the number of sales. We will increase the conversion, because it's a kind of social-proof recommendation and so on. And somebody came up with an idea and said: hey, wait a minute. I also use the website when I'm on the bus, but I don't want to buy with my credit card on the bus. So what if we just send myself a reminder, right? Like, just by email. Okay, fine. Let's fit that in. So the team was happy, they were playing with things they liked, including mail servers, which is apparently very sexy for e-commerce developers. And they were developing that, and after two sprints we went live.
Sorry, after two weeks, one sprint, we went live. And here's the thing. Two days after we went live, we were making money. Two days after we went live. That's two weeks and two days after we started working on something. If you had asked that team, how long will it take to generate this, you know, sexy, nice, shiny... Share on Facebook or Twitter or Instagram feature, they'll probably, you know, say like, you know, whatever, like two sprints, three sprints, whatever. But we didn't have that time. It was an innovation sprint. We got allowance for two weeks. That's what we did. We went live. Two days later, we were generating money. I got an email earlier this year in summer from the product owner that was working at this company. That single feature had generated 250,000 euros in revenues in the first seven months of the year. That's almost half a million per year. Not only that, but the conversion of this feature was 5% compared to the average conversion of the website itself, which was 0.03%.
Two weeks totally transformed the value that one team was delivering. Now imagine what we could do with whole businesses if we would focus on value rather than focus on trying to get the estimates right.
So here's principle five. If you're going to test, test for value first, then test for functionality. Because value is what you're there to deliver. It's not that it works better, it's that it has business impact.
So some of the techniques that you could use for this are, for example, impact mapping and story mapping, which give you options that instead of forcing you to do a bunch of things you already committed to because you estimated them, it actually allows you to make decisions as you go. And I would definitely recommend getting started with impact mapping and focusing on value rather than focusing on the costs of the items that you're trying to deliver.
So obviously the most responsible among you will be asking: but can this work in real companies? I mean, you know, e-commerce and such, but I work for clients that, you know, want the bid, right? They want the price tag when they're buying software from me. Can this work with real companies? So I interviewed CEO Sven Dietz, who works for a company in Germany called Zeitgeist. They basically do websites and e-commerce systems for other companies. So they are a contract software company. They mostly get bought on a time-and-materials basis, and they deliver software that helps other companies' businesses. And in that video Sven explains how they went from going through the normal bid process, where you get a list of requirements and then you try to figure out how long that will take, to a no estimates world where, and here's the kicker, they are now able to bypass the bid process. They don't even need to bid anymore. And the reason is simple. When they get a bid request, instead of sitting down and trying to estimate everything, they write the software. And in a few days they go to the customer and say: here's what we did in the last few days. Would you like us to do more?
They show actual running software. Now, that's a novel concept, right? Working software over contract negotiation. But that's what they did. And the interview is something I can't do enough justice to; you would have to watch it. But in the interview, he explains how they do that and how they actually bypassed a large bid process from a large European company and started delivering software immediately. He says, and this is the thing that really caught my attention, that if you count the time they used to spend estimating, writing up the bid and doing contract negotiations, they lose less money now that they don't have contracts than they did before when they had contracts. Now they work without contracts, so they only get paid if the customer likes them. They deliver working software instead of writing up bids. I bet some developers would prefer that. Because here's the thing: they spent 30% of their time writing up bids and negotiating contracts. 30% of an average project was writing up bids, doing estimates, negotiating contracts. They don't need to do that now.
This is one example of how no estimates is not only working in real companies, but it's transforming their business model.
So obviously the principle here is: estimation is waste. Notice that I'm not saying you shouldn't estimate, I'm just saying it's waste. And if you are familiar with the lean principles, the idea is to reduce waste. You might not always be able to eliminate it, right? Reduce its impact on your business.
That's a very important aspect, that even if you have to do estimates, don't try to manage your business based on the estimates themselves.
And principle seven is measure progress only through validated running software. This is what Sven did in his company.
So, we're back to the definition of a good estimate.
The reality is, and this is a graph that was published by Steve McConnell in his book Software Estimation, which ironically tries to tell you how to be better at estimates. This is data from a company that he studied. And if you take the definition of a good estimate, being within 25% of the actual results, and you look at the reality, you can see that it's not really happening. Every cross is one project. The y-axis is how many days it took to complete the project, and the x-axis is how many days it was estimated to take. Now, that we are bad at estimates, I've already covered. But I want to cover one very specific aspect of being bad at estimates right now.
Look at those two crosses over there. The top left corner.
These two crosses: the first one was supposed to take something like seven to nine days, and it took 260-some days. And the second one was supposed to take nearly 20 days and it took almost 240 days. These outliers, as I would call them, can actually destroy your company, right? So we need to be aware that when we are investing in estimates, we are actually running risks of this size. Now, of course, if we look at this, we can say, well, you know, there's a lot of other projects that are not that far off. I mean, this one only took 120 days when it was supposed to take one day, right? It's not that bad.
But this is the reality with estimates. So when you come to the office and somebody asks you, well, I'm going to need you to deliver 10 stories next sprint, obviously the sane reaction would be to say: what? But let's look at what is actually happening. This is one of the aspects we cover in the No Estimates workshop, which is that we work within systems, right? This is our system. This is one team, by the way, over 21 sprints. The blue line is their throughput, the number of stories delivered per sprint, and the other lines are just control lines. But you see that they are extremely predictable. So here's the thing. Our goal should not be to estimate how long something takes. Our goal should be to understand the system we work within. Because guess what? It's going to be extremely predictable if you collect data. And I'm going to go out on a limb here and say that some people in the room are thinking, well, but our work is different, right? Our work isn't really predictable. I mean, you know, sometimes we have small things, sometimes we have big things. Well, I hear that. A lot. But here's the thing. When I go into companies and work with them to make work visible, I measure the actual work ongoing, and I measure how much work is being delivered per unit of time. Guess what I see? I see this kind of graph. All the time. So here's the thing: if we measure it, if we collect it, we already have the data that allows us to be as predictable as the Shinkansen. We have that data. So we should just be measuring it. We should be looking at our organizations from a systemic point of view, measuring our cycle times and our lead times.
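The idea in this passage, collect throughput per sprint, draw control lines around it, and forecast from the average, can be sketched in a few lines. Everything below (the sprint data, the backlog size, the choice of 3-sigma control lines) is an invented illustration, not the speaker's actual dataset:

```python
import math
from statistics import mean, stdev

# Stories completed per sprint -- hypothetical data for illustration.
throughput = [7, 9, 8, 6, 10, 8, 7, 9, 8, 7]

avg = mean(throughput)        # average throughput of the system
sd = stdev(throughput)        # sprint-to-sprint variation
upper = avg + 3 * sd          # upper control line
lower = max(avg - 3 * sd, 0)  # lower control line (throughput can't be negative)

# Forecast: how many sprints for the remaining backlog, at this rate?
remaining = 40                # stories left in the backlog (hypothetical)
sprints_needed = math.ceil(remaining / avg)

print(f"avg throughput: {avg:.1f} stories/sprint")
print(f"control lines:  {lower:.1f} .. {upper:.1f}")
print(f"forecast:       ~{sprints_needed} sprints for {remaining} stories")
```

Points falling inside the control lines are the "extremely predictable" behavior the talk describes; a point outside them would signal that the system itself changed.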
Because in the end, we want to avoid situations like this. And this is only one of the dysfunctions that we have to face in our organizations. There's four dysfunctions. See how well I estimated the number of dysfunctions I would find?
Estimate bargaining is just one of the problems we have to deal with, right? I mean, you know this when you hear phrases like: well, seriously, two months, guys? When I was a developer, we did it much faster. How come it takes you two months? Of course, you should say: well, actually, the reason it takes us two months now is precisely because you did it so fast back then, right? That's the accidental complication. But of course, there's internal politics, right? Which projects get approved? Anybody have an idea?
The most valuable projects, right? But how do you get a project to be valuable?
You negotiate. Hey, you give me that feature, I'll put it in my project, and then you give me that feature, I'll put it in my project, and then I go to the board and say, I have all of these features to deliver, and by the way, they agree with me. Right? Internal politics. Get projects approved by how big they are. Perhaps even too big to fail.
Blame shifting. How long will it take you to deliver that feature? Two weeks? Fine. Two weeks later I come back: hey, you said two weeks. Right? Now it's your fault you didn't deliver. Blame shifting is a very common dysfunction with estimates. Then there are late changes, and of course the sunk cost fallacy, also known as: we've already invested 5 million into that project, what's 1 million more?
So the system where you work has predictable outputs. The goal is to learn to understand the system. That's really one of the key things I focus on in the workshop. Because, by the way, this is nothing new, right? People like Deming were talking about this in the 20s and 30s, and he even wrote books about it as late as the 90s, but nobody paid attention. The thing is, we all work within predictable systems. Systems are perfectly designed to produce exactly what they produce. So if you know what they produce, you can be quite sure they will continue to produce the same amount over time. Unless you change them. And that's why understanding them is so important.
So, of course, the next obvious question is: can we do better? So this is a story I've told quite a few times, and many of you have perhaps heard it before, but just quickly: this was a long project, 24 sprints, and we were trying to figure out which metric is most accurate at predicting what a team can deliver over a long project. So we asked: comparing story points with number of stories delivered over those 24 sprints, which one is a better predictor of the team's output? And we asked this first after only three sprints, and then of course we asked again after five sprints, because we had more data. We were thinking: hey, more data, more accurate. Let's try that. Now, I do want to make a disclaimer. This is just one case.
If you go to this URL, you will get 21 more, I think 22 more by now, projects with data for all of them. So you can actually look at the data and do your own experiments. And I would encourage you to run this same experiment on your own projects. So after just three sprints, we looked at the predictive power of story points, and we found that 349.5 story points were actually completed, but if you used the average of the first three sprints, you would have predicted 418 story points. That's a prediction 20% higher than what was actually delivered, which means you would be 20% late. But that's okay. 20% is actually better than the definition of a good estimate, by the way.
How about just counting the number of stories? The true output was 228 stories, and the predicted output was 220. That's an error of 4%, but on the right side, right? You would have been ahead of schedule. Well, that's great, but surely story points get better after five sprints. So we did that, and yes, indeed, they got better: 13% off, which is pretty good, right? I mean, if you could always be within 13% of your predictions, you would probably be a millionaire by now.
The predictive power of counting stories did not change. By the way, that is pure coincidence. It could have been higher or lower, but in this case it didn't change and was still 4%.
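The comparison described above, project the average of the first few sprints over the whole project and measure the signed error for each metric, is easy to reproduce on your own data. The sprint numbers below are invented; only the method mirrors the talk:

```python
def forecast_error(per_sprint, known_sprints, total_sprints):
    """Project the average of the first `known_sprints` over the whole
    project; return the signed error vs. the actual total, in percent."""
    actual = sum(per_sprint)
    avg = sum(per_sprint[:known_sprints]) / known_sprints
    predicted = avg * total_sprints
    return 100 * (predicted - actual) / actual

# Hypothetical 8-sprint project, measured two ways.
points_per_sprint = [21, 13, 18, 11, 15, 14, 12, 16]  # story points completed
stories_per_sprint = [6, 7, 6, 5, 7, 6, 6, 7]         # stories completed

print(f"story-point error after 3 sprints: {forecast_error(points_per_sprint, 3, 8):+.0f}%")
print(f"story-count error after 3 sprints: {forecast_error(stories_per_sprint, 3, 8):+.0f}%")
```

Run this against your own sprint history: a positive error means the early average over-predicts (you'd finish late), a negative one means it under-predicts.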
That was just one project, but look at this. This is what I call an estimation magic experiment, done by Bill Handel out of Microsoft. They estimated their whole backlog with 1, 3, and 5. Those were the only story point values they used, which is already much better than most companies. And they plotted their projected release date: the 20th of October 2014. That's using story point estimation, 1, 3, 5 to size all of the stories. And then they did the estimation magic, also known as Excel magic, and changed all of the estimates. A 5 became a 3, a 3 became a 2, and a 1 stayed a 1. They're not even relatively sized the same, right? Totally different numbers. Release date: 14th of October. Six days' difference. What? You're telling me you just changed the estimates and the release date didn't move more than six days out of almost a year? Yeah, but it gets better. Then they changed every estimate to just 1. What happens if we remove the estimates altogether? You know, Excel allows you to do that. Great stuff. Release date: 29th of September. All the projections, with the real estimates, the fake estimates, and the no-estimates version, were within three weeks of each other. Three weeks. That's a margin between the optimistic and the pessimistic projection of 8%.
Compare that to the definition of a good estimate. Obviously, Bill, who was very generous to share this data with me, said after a few emails we exchanged: at that point, I stopped thinking estimates were even important anymore. This is really what I'm asking you to do. Do the same for your own projects. You might actually come to the same conclusion that Bill did. And really, a good estimate is within 25% of the actual, 75% of the time? Is that what a good estimate is?
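One way to see why the "estimation magic" barely moved the release date: a sprints-remaining forecast divides the backlog total by the measured velocity, and a consistent remapping of the estimates shrinks both numbers together, so the ratio stays roughly stable. A toy sketch with invented numbers (this is my illustration of the effect, not Bill's actual spreadsheet):

```python
import math

# Hypothetical remaining backlog, estimated with the 1/3/5 scale.
backlog = [1, 3, 5, 3, 1, 5, 3, 1, 3, 5, 1, 3]
# Hypothetical history: estimates of the items completed in each past sprint.
done_per_sprint = [[3, 5, 1], [1, 3, 3], [5, 1, 3]]

def sprints_left(backlog, history, mapping=None):
    """Forecast sprints remaining; optionally remap every estimate first."""
    m = (lambda x: mapping[x]) if mapping else (lambda x: x)
    velocity = sum(m(e) for sprint in history for e in sprint) / len(history)
    return math.ceil(sum(m(e) for e in backlog) / velocity)

print(sprints_left(backlog, done_per_sprint))                      # original 1/3/5 estimates
print(sprints_left(backlog, done_per_sprint, {5: 3, 3: 2, 1: 1}))  # the "magic" remap
print(sprints_left(backlog, done_per_sprint, {5: 1, 3: 1, 1: 1}))  # every item = 1 (just counting)
```

All three forecasts land within a sprint of each other, which is the same pattern as the six-day and three-week differences in the experiment above.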
I got this email shortly after I released the book. One of the readers was talking about their own experiment. And this is what I would encourage you to do: your own experiments. Don't believe me, believe your data. They looked at a project of 50 sprints, and they discovered that if they had used no estimates, they would have gotten 15% more accurate predictions. And not only that, but out of those 50 sprints, at no point were the story point estimates better than just counting the number of stories.
So principle nine is don't bet your company on methods with this poor track record.
Hope is a bad management strategy. Don't hope you will get better at estimates. There's no data to support that.
So in the book, I tell the story of Carmen. Carmen faces a very difficult situation. She has a project that needs to be delivered on time. It's a high-stakes project, and, no surprise, the time available for the project is limited by the competition. By the way, that's a side effect of the bidding process for software projects: you have to be faster and cheaper than the competition, because, especially in public projects, you get evaluated on how quickly and how cheaply you deliver software. Right? So in the book I talk about what I call the seven-step journey to no estimates. This is just a metaphor, of course; it doesn't need to be seven steps, it can be three or it can be 27, it's up to you. But that's the journey that Carmen goes through. And I explain how you can take a markedly waterfall project that is already ongoing and transform it so that you can use no estimates with that project and actually give more transparency to the stakeholders. Instead of closing down your options, you generate options when you give transparency.
So the book is available now. I've added other things as well. There are actually two mini books. One is about capacity planning with no estimates, because that's obviously a question you will get asked immediately: okay, yeah, yeah, fine, no estimates, but how do we do capacity planning? And the second one is: fine, but we have to write contracts, we need to put the time in there. Well, there are different types of contracts. So look at the different types of contracts that you could use and still apply no estimates. But no estimates is really a practitioner's problem. It's not about theory. So I interviewed nine different people who tell their stories about how they went from estimation to no estimates, how they apply no estimates in their own businesses. There are two CEO interviews among those nine. So this is not about playing with other people's money. This is about people who actually run businesses with their own money using no estimates.
Finally, the tenth principle. This is something I borrowed from Deming, so I didn't come up with the basic idea myself. But here's the thing. No matter what you do, no matter what you think no estimates is, if you want to become better at your job, the transformation starts with you. Don't think your manager will start it, or their manager. Don't think your developers will start it, because actually, having worked with many developers, they also go crazy when you ask: why are we estimating? I worked with one team and they said, hey, but we need to estimate for this sprint. We need to make a commitment, so we need to know how much we can deliver. And I said, yeah, fine, if you want to estimate, go ahead, but I don't need it.
The transformation really starts with every single one of us trying these ideas out, seeing if they work for us.
And even if they don't work, imagine what you will learn once you start to measure how your system works. You might actually be surprised.
Thank you very much.