
  • One Law. Two Equations.

    This is post 4 of 9 in our Little's Law series. In the previous post, we demonstrated how the two different forms of Little's Law (LL) can lead to two very different answers even when using the same dataset. How can one law lead to two answers? As was suggested, the applicability of any theory depends completely on one's understanding of the assumptions that must be in place for that theory to be valid. In the case of LL, however, we have two different equations that purport to express one single theory. Does having two equations require having two sets of assumptions (and potentially two kinds of applicability)? In a word, yes.

Recall that L = λW (the version based on arrival rate) came first, and in his 1961 proof, Little stated his assumptions for the formula to be correct: "if the three means are finite and the corresponding stochastic process strictly stationary, and, if the arrival process is metrically transitive with nonzero mean, then L = λW." You don't need to untangle all of that mathematical language, because Little's initial assumptions turned out to be overly restrictive, as subsequent authors demonstrated (reference #1). All you really need to know is that--very generally speaking--LL is applicable to any process that is relatively stable over time [see note below].

For our earlier thought experiment, I took this notion of stability to an extreme in order to (hopefully) prove a point. In the example data I provided, you'll see that arrivals are infinitely stable in that they never change. In this ultra-stable world, the arrivals form of LL works--quite literally--exactly the way that it should: plug two numbers into the equation, and you get the exact answer for the third.

Things change dramatically, however, when we start talking about the WIP = TH * CT version of the law. Most people assume--quite erroneously--that this latter form of LL requires only the same assumptions as the arrivals version. However, Dr. Little is very clear that changing the perspective of the equation from arrivals to departures has a very specific impact on the assumptions required for the law to be valid. Let's use Little's own words for this discussion: "At a minimum, we must have conservation of flow. Thus, the average output or departure rate (TH) equals the average input or arrival rate (λ). Furthermore, we need to assume that all jobs that enter the shop will eventually be completed and will exit the shop; there are no jobs that get lost or never depart from the shop...we need the size of the WIP to be roughly the same at the beginning and end of the time interval so that there is neither significant growth nor decline in the size of the WIP, [and] we need some assurance that the average age or latency of the WIP is neither growing nor declining." (reference #2)

Allow me to put these in a bulleted list that will be easier for your reference later. For a system being observed for an arbitrarily long amount of time:

• Average arrival rate equals average departure rate
• All items that enter a workflow must exit
• WIP should neither be increasing nor decreasing
• Average age of WIP is neither increasing nor decreasing
• Consistent units must be used for all measures

I added that last bullet point for clarity. It should make sense that if Cycle Time is measured in days, then Throughput cannot be measured in weeks. And don't even talk to me about story points.

If you have a system that obeys all of these assumptions, then you have a system in which the TH form of Little's Law will apply (a sketch of what that looks like in practice follows this post). Wait, what's that you say? Your system doesn't follow these assumptions? I'm glad you pointed that out, because that will be the topic of our next post.

A note on stability

Most people have an incorrect notion of what stability means. "Stable" does not necessarily mean "not changing." For example, Little explicitly states aspects of a system that L = λW is NOT dependent on and that, therefore, may reasonably change over time: size of items, order of items worked on, number of servers, etc. That means situations like adding or removing team members over time may not be enough to consider a process "unstable." To take an extreme counterexample, however, it is easy to see that all of the restrictions and changes imposed during the 2020 COVID pandemic would cause a system to be unstable. From a LL perspective, only when all five assumptions are met can a system reasonably be considered stable (assuming we are talking about the TH form of LL).

References

1. Whitt, W. 1991. A review of L = λW and extensions. Queueing Systems 9(3) 235–268.
2. Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science + Business Media LLC, New York.
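To see the "ultra-stable world" claim in action, here is a minimal Python sketch under invented assumptions: a synthetic process (not the series' example dataset) where one item arrives and one departs every day, and every item takes exactly four days. Over any such look-back window, the TH form of the law holds exactly.

```python
# Hypothetical, perfectly stable process (invented data, not the article's):
# one item starts and one finishes every day; every item takes exactly 4 days.
items = [(start, start + 4) for start in range(0, 30)]  # (start_day, finish_day)

window = range(10, 20)  # the completed interval we look back over

# Average WIP: for each day, count items that have started but not yet finished.
avg_wip = sum(sum(1 for s, f in items if s <= d < f) for d in window) / len(window)

# Average Throughput: items finished per day within the window.
finished = [(s, f) for s, f in items if window.start <= f < window.stop]
avg_th = len(finished) / len(window)

# Average Cycle Time of the items that finished in the window.
avg_ct = sum(f - s for s, f in finished) / len(finished)

print(avg_wip, avg_th, avg_ct)      # 4.0 1.0 4.0
print(avg_wip == avg_th * avg_ct)   # True: WIP = TH * CT holds exactly
```

Note that this sketch measures Cycle Time as a plain difference of day indices; the day-counting (+1) convention discussed later in this series is an equally valid choice, provided all three metrics use consistent conventions (the fifth assumption above).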

  • It's Always The Assumptions

    This is post 5 of 9 in our Little's Law series. Not to get too morbid, but in police detective work, when a married woman is murdered, there are only three rules to determine who the killer is:

1. It's always the husband
2. It's always the husband
3. It's always the husband

The same thing is true when your flow metrics are murdered by your process:

1. It's always the assumptions
2. It's always the assumptions
3. It's always the assumptions

Think back to the first experiment I had you run at the start of this blog series. I had you look at your data, do some calculations, and determine whether you get the results that Little's Law predicts. I even showed you some example data from a real process where the calculated metrics did not yield a valid Little's Law result. I asked you at the time, "What's going on here?" If you've read my last post, then you now have the answer. The problem isn't Little's Law. The problem is your process.

The Throughput form of Little's Law is based on five basic assumptions. Break any one or more of those assumptions at any one or more times, and the equation won't work. It's as simple as that. For convenience for the rest of this discussion, I'm going to re-list Little's assumptions for the Throughput form of his law. For expediency, I am also going to number them, though this numbering is arbitrary and is in no way meant to imply an order of importance (or anything else, for that matter):

1. Average arrival rate equals average departure rate
2. All items that enter a workflow must exit
3. WIP should neither be increasing nor decreasing
4. Average age of WIP is neither increasing nor decreasing
5. Consistent units must be used for all measures

In that earlier post, I gave this example from a team that I had worked with (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3. For this data, WIP / TH is 16.99, not 10.3. That tells us that at one or more points during that 60-day time frame, this team violated one or more of Little's Law's assumptions. One of the first pieces of detective work is to determine which ones were violated, and when.

Almost always, a violation of Little's Law comes down to your process's policies (whether those policies are explicit or not). For example, does your process allow expedited items that violate WIP limits and take priority over other existing work? If so, for each expedited item you had during the 60 days, you violated at least assumptions #3 and #4. Did you have blockers that you ignored? If so, then you at least violated #4. Did you cancel work and just delete it off the board? If so, then you violated #2. And so on.

This was quite possibly the easiest post to write in this series--but probably the most important one. A very quick and easy health check is to compare your calculated flow metrics with those predicted by Little's Law (a minimal sketch of this check follows below). Are they different? If so, then somewhere, somehow, you have violated an assumption. Now your detective work begins. Do you have process policies that are in direct contradiction to Little's Law's assumptions? If so, what changes can you make to improve stability/predictability? Do you have more ad hoc policies that contradict Little? If so, how do you make them more explicit so the team knows how to respond in certain situations? The goal is not to get your process perfectly in line with Little. The goal is to have a framework for continual improvement. Little is an excellent jumping-off point for that.
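Here is what that health check looks like as a minimal Python sketch, using the 60-day numbers from this post. The one-day tolerance is an arbitrary illustration, not part of the law.

```python
# Quick health check: compare measured average Cycle Time against the
# Cycle Time that Little's Law predicts from average WIP and Throughput.
avg_wip = 19.54      # average work in progress (items)
avg_th = 1.15        # average throughput (items per day)
measured_ct = 10.3   # average cycle time actually measured (days)

predicted_ct = avg_wip / avg_th  # CT = WIP / TH, rearranged Little's Law

print(f"Little's Law predicts CT = {predicted_ct:.2f} days; measured {measured_ct} days")
if abs(predicted_ct - measured_ct) > 1:  # tolerance is a judgment call
    print("Mismatch: at least one of the five assumptions was violated.")
```

For this team, the predicted 16.99 days versus the measured 10.3 days is exactly the mismatch that kicks off the detective work described above.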
Speaking of continual improvement, when it comes to spotting improvement opportunities as soon as possible, there is one assumption above that is more important than all of the others. If you have followed my work up until now, then you know which assumption that is. If not, then read on to the next post...

  • How NOT to use Little's Law

    This is post 7 of 9 in our Little's Law series. You may or may not be surprised to hear me say that the Little's Law equation is indeed deterministic. But, as I have mentioned several times in the past, it is not deterministic in the way that you think it is. The law is concerned with looking backward over a time period that has already been completed. It is not about looking forward; that is, it is not meant to be used to make deterministic predictions. As Dr. Little himself says about the law, "This is not all bad. It just says that we are in the measurement business, not the forecasting business." (1) In other words, the fundamental way NOT to use Little's Law is to use it to make a forecast.

Let me explain, as this is a sticking point for many people (again, most blog posts on the interwebs get this wrong). The "law" part of Little's Law specifies an exact (deterministic) relationship between average WIP, average Cycle Time, and average Throughput, and this "law" part applies only when you are looking back over historical data. The law is not about--and was never designed for--making deterministic forecasts about the future.

For example, assume a team that historically has had an average WIP of 20 work items, an average Cycle Time of 5 days, and an average Throughput of 4 items per day. You cannot say that you are going to increase average WIP to 40, keep average Cycle Time constant at 5 days, and, magically, Throughput will increase to 8 items per day--even if you add staff to keep the WIP-to-staff ratio the same in the two instances. You cannot assume that Little's Law will make that prediction. It will not. All Little's Law will say is that an increase in average WIP will result in a change to one or both of average Cycle Time and average Throughput. It will further say that those changes will manifest themselves in ways such that the relationship among all three metrics still obeys the law. But what it does not say is that you can deterministically predict what those changes will be. You have to wait until the end of the time interval you are interested in and look back to apply the law.

The reason is that--as we saw in the last post--it is impossible to know which of Little's assumptions you will violate in the future, or how many times. As a point of fact, any violation of the assumptions will invalidate the law (regardless of whether you are looking backward or forward). But that restriction is not fatal. The proper application of Little's Law in our world is to understand the assumptions of the law and to develop process policies that match those assumptions. If the process we operate conforms--or mostly conforms--to all of the assumptions of the law, then we get to a world where we can start to trust the data that we are collecting from our system. It is at this point that our process is probabilistically predictable. Once there, we can start to use something like Monte Carlo simulation on our historical data to make forecasts, and, more importantly, we can have some confidence in the results we get by using that method.

There are other, more fundamental reasons why you do not want to use Little's Law to make forecasts. For one thing, I have hopefully by now beaten home the point that Little's Law is a relationship of averages. I mention this again because even if you could use Little's Law as a forecasting tool (which you cannot), you would not want to, as you would be producing a forecast based on averages. Anytime you hear the word "average," you must immediately think "Flaw of Averages" (2). As a quick reminder, the Flaw of Averages (crudely) states that "plans based on average assumptions will fail on average." So, if you were to forecast using LL, you would be right only an average amount of the time--in other words, you would most likely be wrong just as often as you were right. That's not very predictable, from a forecasting perspective.

Having said all that, there is no reason why you cannot use the law for quick, back-of-the-envelope estimations about the future. Of course you can do that. I would not, however, make any commitments, WIP control decisions, staff hiring or firing decisions, or project cost calculations based on this type of calculation alone. I would further say that it is negligent for someone even to suggest doing so. But this simple computation might be useful as a quick gut check to decide whether something like a project is worth any further exploration (see the sketch below for how that gut check differs from a probabilistic forecast).

While using Little's Law to forecast is a big faux pas, there are other myths that surround it, which we will cover very quickly in the next post in the series.

References

1. Little, J. D. C. Little's Law as Viewed on Its 50th Anniversary. https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/OS/Littles-Law-50-Years-Later.pdf
2. Savage, Sam L. 2009. The Flaw of Averages. John Wiley & Sons, Inc.
3. Vacanti, Daniel S. 2014. Actionable Agile Metrics for Predictability. ActionableAgile Press.
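To make the contrast concrete, here is a minimal Python sketch (everything in it--the throughput history, the backlog size, the seed--is invented) that puts the average-based gut check next to a Monte Carlo percentile forecast of the kind this post recommends.

```python
# Back-of-the-envelope average vs. a probabilistic forecast (illustrative only).
import random

random.seed(7)
past_daily_throughput = [0, 2, 1, 0, 3, 1, 0, 0, 2, 1, 4, 0, 1, 2, 0]  # hypothetical
backlog = 30  # items we want to finish

# Gut check with averages: backlog divided by average daily throughput.
avg_th = sum(past_daily_throughput) / len(past_daily_throughput)
print(f"Average-based estimate: {backlog / avg_th:.0f} days")

# Monte Carlo: replay randomly sampled past days until the backlog is done,
# thousands of times, then read risk off the distribution of outcomes.
def days_to_finish(backlog, history):
    done, days = 0, 0
    while done < backlog:
        done += random.choice(history)  # any past day can look like any future day
        days += 1
    return days

runs = sorted(days_to_finish(backlog, past_daily_throughput) for _ in range(10_000))
print(f"50th percentile: {runs[len(runs) // 2]} days")
print(f"85th percentile: {runs[int(len(runs) * 0.85)]} days")
```

The single average number lands near the 50th percentile: a plan built on it fails roughly half the time, which is the Flaw of Averages in one line of output.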

  • The Most Important Metric of Little's Law Isn't In The Equation

    This is post 6 of 9 in our Little's Law series. As we discussed in the previous post, a thorough understanding of what it means to violate each of the assumptions of Little's Law (LL) is key to the optimization of your delivery process. So let's take a minute to walk through each of those in a bit more detail.

The first thing to observe about the assumptions is that #1 and #3 are logically equivalent. I'm not sure why Dr. Little calls these out separately, because I've never seen a case where one is fulfilled but the other is not. Therefore, I think we can safely treat those two as the same. More importantly, notice what Little is not saying with either #1 or #3. He is making no judgment about the actual amount of WIP that is required to be in the system. He says nothing of less WIP being better or more WIP being worse. In fact, Little couldn't care less. All he cares about is that WIP is stable over time. So while having arrivals match departures (and thus unchanging WIP over time) is important, that tells us nothing about whether we have too much WIP, too little WIP, or just the right amount. Assumptions #1 and #3, therefore, while important, can be ruled out as *the* most important.

Assumption #2 is one that is frequently ignored. In your work, how often do you start something but never complete it? My guess is the number of times that has happened to you over the past few months is greater than zero. Even so, while this assumption is again of crucial importance, violating it is usually the exception rather than the rule. Unless you find yourself in a context where you are always abandoning more work than you complete (in which case you have much bigger problems than LL), this assumption will not be the dominant reason why you have a suboptimal workflow.

This leaves us with assumption #4. Allowing items to age arbitrarily is the single greatest factor in why you are not efficient, effective, or predictable at delivering customer value. Stated a different way: if you plan to adopt the use of flow metrics, the single most important thing to pay attention to is not letting work items age unnecessarily. More than limiting WIP, more than visualizing work, more than finding bottlenecks (which is not necessarily a flow thing anyway), the one question to ask of your flow system is, "Are you letting items age needlessly?" Get aging right, and most of the rest of predictability takes care of itself.

As this is a blog series about Little's Law, getting into the specifics of how to manage item aging is a bit beyond our remit, but thankfully Julia Wester has done an excellent job of giving us an intro to how you might use ActionableAgile Analytics for this goal.

To me, one of the strangest results in all of flow theory is that the most important metric to measure does not really appear in any equation (much less Little's Law). While I always had an intuition that aging was important, I never really understood its relevance. It wasn't until I went back and read the original proofs and subsequent articles by Little and others that I grasped its significance. You'll note that, other than the Kanban Guide, flow-based frameworks do not even mention work item aging. Kinda makes you wonder, doesn't it?

Having now explored the real reasons to understand Little's Law (e.g., pay attention to aging and understand all the assumptions), let's now turn our attention to some ways in which Little's Law should NOT be used.
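Because Work Item Age appears in no equation, it is easy to forget how simple it is to watch. Here is a minimal Python sketch under invented assumptions (the items, dates, and the 12-day threshold are all hypothetical) that flags in-progress items whose age is creeping past what historical Cycle Times would suggest.

```python
# Minimal Work Item Age check (hypothetical data throughout).
from datetime import date

today = date(2023, 3, 15)  # fixed "today" for a reproducible example
in_progress = {            # item id -> started date (invented)
    "A-101": date(2023, 3, 13),
    "A-102": date(2023, 2, 20),
    "A-103": date(2023, 3, 1),
}
ct_85th = 12  # e.g. 85% of past items finished in <= 12 days (made-up threshold)

for item, started in in_progress.items():
    age = (today - started).days + 1  # same "+1" day-counting convention as Cycle Time
    flag = "  <-- aging past the 85th percentile" if age > ct_85th else ""
    print(f"{item}: {age} days old{flag}")
```

The point of the sketch is the habit, not the code: reviewing age against a historical Cycle Time percentile, every day, is how aging problems get caught while they are still cheap to fix.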

  • Other Myths About Little's Law

    This is post 8 of 9 in our Little's Law series. In the previous blog post, we talked about the single biggest error people make when applying Little's Law. That's not to say there aren't others out there. Thankfully, Prateek Singh and I recorded an episode of our Drunk Agile podcast to go over some of these other myths in more detail. While a large portion of what we talk about in that episode is a rehash of the forecasting debacle, we also get into lesser-known myths like:

1. Little's Law can be used to set WIP limits
2. Little's Law can be "proven" using Cumulative Flow Diagrams
3. All items need to be the same size
4. Cycle Times must be normally distributed
5. FIFO queuing is required

BTW, you will recall from a previous post that I quoted Little as saying, "...but it is quite surprising what we do not require. We have not mentioned how many servers there are, whether each server has its own queue or a single queue feeds all servers, what the service time distributions are, what the distribution of inter-arrival times is, or what is the order of service of items, etc." (1) If Little himself says the law requires none of these things, who are we to disagree? So grab your favourite whisky and enjoy!

References

1. Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science + Business Media LLC, New York.
2. Drunk Agile YouTube channel: https://www.youtube.com/@drunkagile4780

  • Little's Law - Why You Should Care

    This is post 9 of 9 in our Little's Law series. I personally can't fathom how someone could call themselves a flow practitioner without having made a concerted effort to study Little's Law. The truth, though, is that some of the posts in this series have gone into way more detail about LL than most people would ever practically need. Having said that, without an understanding of what makes Little's Law work, teams are making decisions every day that are in direct contravention of established mathematical facts (and paying the consequences). To that end, here is my suggested reading list for anyone interested in learning more about Little's Law (in this particular order):

1. http://web.eng.ucsd.edu/~massimo/ECE158A/Handouts_files/Little.pdf -- Frank Vega and I call this "Little's Law Chapter 5," as it is a chapter taken from a textbook that Little contributed to. For me, this is hands down the best introduction to the law in its various forms. I am not lying when I say that I've read this paper 50 times (probably closer to 100) and get something new from it with each sitting.

2. https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/OS/Littles-Law-50-Years-Later.pdf -- This is a paper Little wrote on the 50th anniversary of the law. It builds on the concepts of Chapter 5 and goes into more detail about the history of L = λW since its first publication in 1961. This paper, along with Chapter 5, should tell you 95% of what you need to know about LL.

3. http://fisherp.scripts.mit.edu/wordpress/wp-content/uploads/2015/11/ContentServer.pdf -- Speaking of the first publication of the proof of L = λW, there's no better teacher than going right to the source. This article is my third recommendation because it is a bit mathy, but its publication is one of the seminal moments in the history of queuing theory, and any queuing theory buff should be familiar with this proof.

For extra credit:

4. http://www.columbia.edu/~ww2040/ReviewLlamW91.pdf -- This article is not for the faint of heart. I recommend it not only for its comprehensive review of L = λW but also (and mostly) for its exhaustive reference list. Work your way through all of the articles listed at the end of this paper, and you can truly call yourself an expert on Little's Law.

If you read all of these, then you can pretty much ignore any other blog or LinkedIn post (or Wikipedia article, for that matter) that references Little's Law. Regardless of the effort you put in, however, expertise in LL is not the end goal. The end goal is altogether different.

Why You Really Should Care

If you are studying Little's Law, it is probably because you care about process improvement. Chances are the area of process improvement that you care most about is predictability. Remember that being predictable is not completely about making forecasts. The bigger part of predictability is operating a system that behaves in a way that we expect it to. By designing and operating a system that follows the assumptions set forth by Little's Law, we will get just that: a process that behaves the way we expect it to. That means we will have controlled the things that we can control, and the interventions we take to make things better will result in outcomes more closely aligned with our expectations. That is to say, if you know how Little's Law works, then you know how flow works. And if you know how flow works, then you know how value delivery works.

I hope you have enjoyed this series and would welcome any comments or feedback you may have. Thanks for going on this learning journey with me!

  • What's the Tallest Mountain On Earth?

    If, like most everyone else, you answered, "Mount Everest," then you are not quite right. But you are not quite wrong, either. The real answer has to do with a concept I wrote about in an earlier blog post.

Scientists can all objectively agree on where mountains "finish." That is, it's extremely hard to argue about where a mountain "peaks." But when measuring, we know that "finished" is only half the battle. Agreeing on where a mountain "starts" is a whole other conversation--and not nearly as straightforward as it may sound. For example, more than half of the Mauna Kea volcano in Hawaii is underwater. Only 4,205 meters of the whole mountain is above sea level. But if we measure from the base to the summit of Mauna Kea, it is 10,211 meters--roughly 15% taller than Everest's 8,848 meters. If you only want to talk about mountains on land, then, base-to-summit, Denali in Alaska (5,900 meters) is actually taller than Everest (4,650 meters).

So why does Everest get the crown? Because most scientists choose to start their measurements of mountain heights from a concept known as sea level. The problem with sea level, as anyone who has studied geography knows, is that the sea ain't so level. Different densities in the earth's makeup at different locations cause different gravitational pulls, resulting in "hills and valleys" of sea level across the planet (the European Space Agency has an outstanding visualization of this). Add to that tides, storms, wind, and a bulge around the equator due to the earth's rotation, and there is no one true level for the sea. Scientists cheat to solve this problem by calculating a "mean" (arithmetic mean, or average) sea level. This "average" sea level represents the zero starting point from which all land mountains are measured (cue the "Flaw of Averages"). You might ask: why don't we choose a more rigorous starting point, like the center of the earth? Well, remember that bulge around the equator I just alluded to? The earth itself is not quite spherical, and the distance from its center to the equator is longer than the distance from its center to either pole. In case you were wondering, if we were to measure from the center of the earth, then Mount Chimborazo in Ecuador would win.

It seems that geologists fall prey to the same syndrome that afflicts most Agile methodologies. A bias toward defining only when something is "done" ignores half of the equation--and the crucial half at that. What's more, you have Agilists out there who actively rant against any notion of a defined "start" or "ready." What I hope to have shown here is that, in many instances, deciding where to start can be a much more difficult (and usually much more important) problem to solve, depending on what question you are trying to answer. At the risk of repeating myself: a metric is a measurement, and any measurement contains BOTH a start point AND a finish point. Therefore, begin your flow data journey by defining the start and end points in your process. Then consider updating those definitions as you collect data and as your understanding of your context evolves. Anything else is just theatre.

References

PBS.org, "Be Smart," Season 10, Episode 9, 08/10/2022
The European Space Agency, https://www.esa.int/

  • Applying Flow Metrics for Scrum

    Are you using ActionableAgile™ in a Scrum context? Well, good news! Our friends at ProKanban.org have just published a class called "Applying Flow Metrics for Scrum." While this class is technically tool agnostic, you will learn much about how to get the most out of ActionableAgile™ while using Scrum. To learn more, please visit https://prokanban.org/applying-flow-metrics-for-scrum/ Happy Forecasting!

  • All Models Are Wrong. Some Are Random.

    Disclaimer: This post is for those who really like to geek out on the inner workings of Monte Carlo Simulations. If that's not you, hopefully you will find our other blog posts more to your liking!

Have you ever wondered why we implement Monte Carlo Simulations (MCS) the way we do in ActionableAgile™️ Analytics (AA)? Before we get too deep into answering that question, it is worth first taking a step back and talking about the one big assumption that all Monte Carlo Simulations in AA make: that the future we are trying to predict roughly looks like the past we have data for. For example, in North America, is it reasonable to use December's data to forecast what can be done in January? Maybe not. In Europe, can we use August's data to predict what can be done in September? Again, probably not. The trick, then, is to find a time period in the past that we believe will accurately reflect the future we want to forecast. If you don't account for this assumption, then any Monte Carlo Simulation you run will be invalid.

Let's say we do account for this assumption, and we have a set of historical data that we are confident plugging into our simulation. The way AA works is to say that ANY day in the past data can look like ANY day in the future we are trying to forecast. So we randomly sample data from a day in the past (treating each day in the past as equally likely) and assign that data value to a day in the future. We do this sampling thousands of times to understand the risk associated with all the outcomes that show up in our MCS results.

But let's think about this for a second. We are assigning a random day in the past to a random day in the future. Doesn't that violate the big assumption we just talked about? In other words, if any day from the past can look like any day in the future, then we could presumably (and almost certainly do) use data from a past Monday and assign it to a future Saturday. Or we use data from a past Sunday and assign it to a future Wednesday. Surely, Mondays in the past don't look like Saturdays in the future, and Sundays in the past don't look like Wednesdays in the future, right? Doesn't this mean we should refine our sampling algorithm and make it a bit more sophisticated in order to eliminate these obvious mistakes? That is, shouldn't we have an algorithm that only assigns past Mondays to future Mondays and past Sundays to future Sundays? Or, at least, one that assigns past weekdays to future weekdays and past weekends to future weekends?

Well, Prateek Singh did just that when he tried different sampling algorithms for different simulations, and the results may surprise you. I highly encourage you to read his blog, as it is the more scientific justification for why we use the sampling algorithm that we do in AA. I don't want to ruin the surprise for you, but (spoiler alert) with AA, we chose the best one. Happy Forecasting!

P.S. For a much more robust treatment of the actual MCS algorithm, please see my book "When Will It Be Done?" or my self-paced video class on Metrics and Forecasting in the 55 Degrees Community.

  • In God We Trust. All Others Bring Data.

    Before proceeding, it would be worth reviewing Julia's excellent posts on the four basic metrics of Flow: Work Item Age, Cycle Time, Throughput, and WIP. The definitions are great but are, unfortunately, meaningless unless we know what data we need to capture to calculate them. In terms of data collection, this is where our harping on you to define started and finished points will finally pay off. Take a timestamp when a work item crosses your started point, and take another timestamp when that same work item crosses your finished point. Do that for every work item that flows through your process, as shown below (forgive the American-style dates):

[Figure: a table of work items with their started and finished dates]

That's it. To calculate any or all of the basic metrics of flow, the only data you need is the timestamp for when each item started and the timestamp for when each item finished. Even better, if you are using some type of work item tracking tool to help your team, then most likely your tool is already collecting this data for you. The downside of using a tracking tool, though, is that you may not be able to rely on any out-of-the-box metrics calculations it gives you. It is one of the great secrets of the universe as to why many Agile tools cannot calculate flow metrics properly, but, for the most part, they cannot. Luckily for you, that's what this blog post is all about. To properly calculate each of the metrics from the data, proceed as follows (a code sketch of all four calculations appears at the end of this post).

WIP

WIP is the count of all work items that have a started timestamp but not a finished timestamp for a given time period. That last part is a bit difficult for people to grasp. Although technically WIP is an instantaneous metric--at any time you could count all of the work items in your process to calculate WIP--it is usually more helpful to talk about WIP over some time unit: days, weeks, Sprints, etc. Our strong recommendation--and this is going to be our strong recommendation for all of these metrics--is that you track WIP per day. Thus, if we want to know our WIP for a given day, we just count all the work items that had started but not finished by that date. For the picture above, our WIP on January 5th is 3 (work items 3, 4, and 5 have all started before January 5th but have not finished by that day).

Cycle Time

Cycle Time equals the finished date minus the started date plus one (CT = FD - SD + 1). If you are wondering where the "+ 1" comes from, it is because we count every day in which the item is worked as part of the total. When a work item starts and finishes on the same day, we would never say that it took zero time to complete. So we add one, effectively rounding the partial day up to a full day. What about items that don't start and finish on the same day? Say an item starts on January 1st and finishes on January 2nd. The above Cycle Time definition gives an answer of two days (2 - 1 + 1 = 2). We think this is a reasonable, realistic outcome. From the customer's perspective, if we communicate a Cycle Time of one day, they can have a realistic expectation that they will receive their item the same day. If we tell them two days, they can have a realistic expectation that they will receive their item the next day, and so on. You might be concerned that this calculation is too biased toward measuring Cycle Time in terms of days.
In reality, you can substitute whatever notion of "time" is relevant for your context (that is why, up until now, we have kept saying track a "timestamp" and not a "date"). Maybe weeks are more relevant for your specific situation. Or hours. Or even Sprints. [For Scrum, if you wanted to measure Cycle Time in terms of Sprints, the calculation would just be Finished Sprint - Started Sprint + 1 (assuming work items cross Sprint boundaries in your context).] The point is that this calculation is valid for all contexts. However, as with WIP, our very strong recommendation is to calculate Cycle Time in terms of days. The reasons are too numerous to get into here, so when starting out, calculate Cycle Time in days and experiment with other time units later should you feel you need them (our guess is you won't).

Work Item Age

Work Item Age equals the current date minus the started date plus one (Age = CD - SD + 1). The "plus one" argument is the same as for Cycle Time above. Our apologies, but you will never have a work item with an Age of zero days. Again, our strong recommendation is to track Age in days.

Throughput

Let's take a look at a different set of data to make our Throughput calculation example a bit clearer. To calculate Throughput, begin by noting the earliest date that any item was completed and the latest date that any item was completed, then enumerate the dates in between. In our example, those dates in sequence are 03/01/2016 through 03/04/2016. Now, for each enumerated date, simply count the number of items that finished on that exact date. For our data, we had a Throughput of 1 item on 03/01/2016, 0 items the next day, 2 items the third day, and 2 items the last day. Note the Throughput of zero on 03/02/2016--nothing was finished that day.

As stated above, you can choose whatever time units you want to calculate Throughput. If you are using Scrum, your first inclination might be to calculate Throughput per Sprint: "we got 14 work items done in the last Sprint." Let us very strongly advise against that: measure Throughput in terms of days. Again, it would take a book in itself to explain why, but here are two quick justifications: (1) using days will give you much better flexibility and granularity when we start doing things like Monte Carlo simulation; and (2) using consistent units across all of your metrics will save you a lot of headaches. So if you are tracking WIP, Cycle Time, and Age in days, you will make your life a whole lot simpler if you track Throughput in days too. For Scrum, you can easily derive Throughput per Sprint from this same data if that still matters to you.

Randomness

We've saved the most difficult part for last. You now know how to calculate the four basic metrics of flow at the individual work item level. Further, we now know that all of these calculations are deterministic. That is, if we start a work item on Monday and finish it a few days later on Thursday, then we know that the work item had a Cycle Time of exactly four days. But what if someone asks us what our overall process Cycle Time is? Or our overall process Throughput? How do we answer those questions? Our guess is you immediately see the problem here. If, say, we look at our team's Cycle Times for the past six weeks, we will see that work items finished in a wide range of times. Some in one day, some in five days, some in more than 14 days, etc.
In short, there is no single deterministic answer to the question, "What is our process Cycle Time?" Stated slightly differently, your process Cycle Time is not a unique number; rather, it is a distribution of possible values. That's because your process Cycle Time is really what's known as a random variable. [By the way, we've been talking about Cycle Time in this section only for illustrative purposes; each of the basic metrics of flow (WIP, Cycle Time, Age, Throughput) is a random variable.] What random variables are and why you should care is a topic well beyond the scope of this post. What you do need to know is that your process is dominated by uncertainty and risk, which means all the flow metrics you track will reflect that uncertainty and risk. Further, that uncertainty and risk will show up as randomness in all of your flow metric calculations. How variation impacts the interpretation of flow metrics, and how it impacts any action you take to improve your process, will be the topic of a blog series coming later this year. For now, what you need to know is that the randomness in your process is what makes it stochastic. You don't necessarily need to understand what stochastic means, but you should understand that all stochastic processes behave according to certain "laws". One such law you may have heard of before...
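As promised above, here is a minimal Python sketch of all four calculations. The work items are invented (chosen so the daily Throughput comes out to the 1, 0, 2, 2 sequence described above); only the formulas themselves come from this post.

```python
# The four flow metric calculations from started/finished timestamps.
from datetime import date, timedelta

items = [  # (started, finished); finished is None for work still in progress
    (date(2016, 2, 27), date(2016, 3, 1)),
    (date(2016, 2, 28), date(2016, 3, 3)),
    (date(2016, 3, 1), date(2016, 3, 3)),
    (date(2016, 3, 1), date(2016, 3, 4)),
    (date(2016, 3, 2), date(2016, 3, 4)),
    (date(2016, 3, 2), None),
]

def wip(on_day):
    """Count items started by the given day but not yet finished."""
    return sum(1 for s, f in items if s <= on_day and (f is None or f > on_day))

def cycle_time(started, finished):
    return (finished - started).days + 1   # CT = FD - SD + 1

def age(started, today):
    return (today - started).days + 1      # Age = CD - SD + 1

def daily_throughput():
    """Items finished per day, including zero days, from first to last finish."""
    finishes = [f for _, f in items if f is not None]
    day, counts = min(finishes), {}
    while day <= max(finishes):
        counts[day] = sum(1 for f in finishes if f == day)
        day += timedelta(days=1)
    return counts

print(wip(date(2016, 3, 2)))                           # WIP on March 2nd
print(cycle_time(date(2016, 3, 1), date(2016, 3, 3)))  # 3 days, per the +1 rule
for day, count in daily_throughput().items():
    print(day, count)                                  # 03/02 shows zero Throughput
```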

  • The Four-Letter Word That Begins With F

    Many of our future planned posts will refer to a concept known as Flow. For as much as Flow is talked about in Lean-Agile circles, there really aren't many reliable definitions of what Flow actually is. Our inspiration for the definition we will use going forward is the definition of Flow found in the Kanban Guide (and by inspiration, I mean the document we will shamelessly steal from).

What Is Flow?

The whole reason for the existence of your current job (team) is to deliver value for your customers/stakeholders. Value, however, doesn't just magically appear. Constant work must be done to turn product ideas into tangible customer value. The sum total of the activities needed to turn an idea into something concrete is called a process. Whether you know it or not, you and your team have built a value delivery process. That process may be explicit or implicit, but it exists. An understanding of your process is fundamental to an understanding of Flow. Once your process is established, Flow is simply defined as the movement of potential value through that process.

Flow: the movement of potential value through a given process.

Maybe you've heard of the other name for process: workflow. There is a reason it is called workFLOW: for any process, what really matters is the flow of work. (Note: in future posts, I will often use the words "process," "workflow," and "system" interchangeably. I will try my best to indicate a difference between these when a difference is warranted. For most contexts, however, any difference among these words is negligible, so they can safely be used synonymously.)

The reason you should care about Flow is that your ability to achieve Flow in your process will dictate how effective, efficient, and predictable you are as a team at delivering customer value--which, as we stated at the beginning, is the whole reason you are here.

Setting Up To Measure Flow

As important as Flow is as a concept, it can only act as a guide for improvements if you can measure it. Thankfully for us (and thankfully for ActionableAgile™️), Flow comes with a set of basic metrics that will give us such insight. But before we can talk about what metrics to use, we first need to talk about what must be in place in order to calculate those metrics. All metrics are measurements, and all measurements have the same two things in common: a start point and an end point. Measuring Flow is no different. To measure Flow, we must know what it means for work to have started in our process and what it means for work to have finished in our process. The decision around started and finished may seem trivial, but we can assure you it is not. How to set started and finished points in your process is beyond the scope of this post, but there are some decent references out there if you need help.

It gets a little more complicated than that, because it is perfectly allowed in Flow to have more than one started point and more than one finished point within a given workflow. Maybe you want to measure both from when a customer asks for an item and from when the team starts working on the item. Or maybe a team considers an item finished when it has been reviewed by Product Management, put into production, validated by the end user, or whatever. Any and all permutations of started and finished in your process are allowed. Not only are the different permutations allowed, but you are encouraged to experiment with different started and finished points in your process to better understand your context. You will quickly learn that changing the definition of started/finished allows you to answer very different questions about flow in your process. If all goes well, expanding your started/finished points will get you down the path toward true business agility. The point is--as you will see--that the questions Little's Law will help you answer depend completely on your choices for started and finished.

Conclusion

Assuming you care about optimizing the value-delivery capabilities of your process, you should care about Flow. And it doesn't matter whether you are using Scrum, SAFe, Kanban, or something else for value delivery--you should still care about Flow. Therefore, if you haven't already, you need to sit down and decide--for your process--what it means for work to have started and what it means for work to have finished. All other Flow conversations will depend on those boundary decisions. Once defined, the movement of potential value between your defined started and finished points is what is called Flow. The concept of movement is of crucial importance, because the last thing we want as a team is to start a whole bunch of work that never gets finished. That is the antithesis of value delivery. What's more, as we do our work, our customers are constantly going to be asking (whether we like it or not) questions like "how long?" and "how many?"--questions that require an understanding of movement to answer. That's where Flow Metrics come in...

  • Want to succeed? Start by accepting uncertainty.

    In business, the quest for predictability is universal. We all want to grab hold of the reality we face every day and, somehow, bend it to our will. When we are surprised by the unexpected, we often assume that we have failed in some way. We have an underlying belief that if we just do our job well enough, we can prevent any and all surprises, and that success will follow. Unfortunately, that's nothing more than a nice fairy tale. In real life, we have no hope of overcoming all uncertainty--zero. Instead, we must begin to accept it and learn how to operate, even thrive, within it. But we can't do any of that if we don't try to understand it.

Stephen Bungay, the author of The Art of Action, helps us understand the shape of our uncertainty by expressing it via something he calls "the three gaps." These gaps are places where uncertainty shows up:

The Knowledge Gap: the difference between what we'd like to know and what we actually know. This gap occurs when you're trying to plan but often only manifests when you're trying to execute the plan. We frequently try to combat this gap not by doing something different than before, but by doubling down on what we've already done--as if we just didn't do it well enough the first time. So, instead of accepting that we may never know everything we need to know up front, we double down on detailed plans and estimates.

The Alignment Gap: the difference between what we want people to do and what they actually do. This gap occurs during execution. As with the knowledge gap, we try to fix it by doubling down--in this case, on providing more detailed instructions and requirements. We are quite arrogant in our thinking: we believe that if we can just be more thoughtful and more detailed, we can prevent all surprises.

The Effects Gap: the difference between what we expect our actions to achieve and what they actually achieve. This gap occurs during verification. We don't often consider that, in a complex environment, you can do the same thing over and over and get different outcomes despite your best efforts. Instead, we think we just didn't have enough controls. We are stubborn to the point of stupidity and continue to think that we can manage our way to certainty.

[Image: Jim Benson's illustration of how we like to try to overcome the gaps]

The ugly truth

By reinforcing the idea that you can control your way to certainty, you aren't teaching people how to be resilient and how to operate despite what comes their way. When surprises do sneak through, people will be woefully unprepared, and, more often than not, efforts will veer toward blaming the responsible party instead of figuring out a way forward. The ugly truth that we all must face is that, in complex environments like software development, healthcare, social work, product development, marketing, and more, we will never defeat uncertainty. And to be honest, we wouldn't like what would happen if we did: it would be the end of learning and innovation.

So, what now? While we have to accept that some uncertainty will always remain, we can tackle the low-hanging fruit. We don't abandon all research or planning. We just accept that things may not always go to plan and have an idea of how we'd react when uncertainty pops up.
When I managed the web development team for NBA.com, we would run drills for our major events, like the Draft, and walk through scenarios like "What happens if a team drafts someone we don't have a bio for?" and "What will we do if our stats engine breaks down?" We accepted that, because we can't control everything, the skill we really need to survive in business is resiliency. We needed to learn to anticipate, react, and recover. We learned how to think about resiliency and build it into our work processes, not just our technical systems.

So, if you are finally getting to the point of accepting that you can't conquer uncertainty, the next mission is to begin to build the skills of resiliency. There is no comprehensive list of ways to become resilient, but I'll share a few things I use while working in an uncertain environment.

The Agile Manifesto

The Agile Manifesto is an excellent embrace of uncertainty and a pushback against our natural tendencies when reacting to Bungay's three gaps. While there is a place for plans, documentation, contracts, and processes, they are not the only, or even the most important, things we need to excel in uncertain environments.

The Scrum Framework

One of the biggest benefits of the Scrum framework is that Sprints act as a forcing function to work in small batches. If you work in a smaller batch, you notice the gaps more quickly, and if you fall prey to those natural tendencies to double down on instruction and planning, you'll do so in a smaller way and, hopefully, learn more before the next piece of work starts. This is a perfect example of accepting uncertainty and trying to limit the potential damage.

Kanban and Limiting Work-In-Progress

Adopting Kanban forces you to limit the amount of work going on at one time. This has a similar benefit to the Scrum framework, but at an even more granular level. While Scrum limits how much you start in a Sprint, Kanban limits how much you have in progress at any one time. Thinking of your work-in-progress in economic terms can really help you understand the value of limiting it. My friend and generally awesome person Cat Swetel once said that you can think of your work as falling into three buckets:

• Options -- work not started
• Liabilities -- work in progress
• Assets -- work already finished

It is in our liabilities that we are subject to the effects of uncertainty. If we limit that exposure to a manageable amount, we limit the possible damage and, more often than not, we turn liabilities into assets faster.

Probabilistic forecasting

Often, even though we know there are many potential outcomes, we still provide a single forecast. A better way, one that visualizes the existing uncertainty, is to give forecasts AND state the likelihood that each particular forecast will come true. You're very familiar with this whether you realize it or not: every weather forecast you've seen uses this approach. Doing this is easy. You can use your historical data to forecast probable outcomes with Cycle Time scatterplots (for single items) or Monte Carlo simulations (for a group of items). A minimal sketch of the single-item version appears at the end of this post.

Wrapping it up

No matter what you choose to do going forward, by far the most important choice you can make is to accept the inevitability of uncertainty and to commit to learning how to thrive in the face of it. Sharing stories of your successes and failures helps both you and those who hear or read them to widen their perspectives.
Having to tell the story makes you synthesize the information and draw conclusions, so that you understand what happened well enough to tell the tale. And while your context will likely not perfectly match that of your readers or listeners, it may provide perspective and information that they can incorporate into their own hypotheses.
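As promised above, here is a minimal Python sketch of the single-item, weather-forecast style of probabilistic forecast. The cycle times are invented; the idea is simply to report several likelihoods instead of one number.

```python
# Turning historical Cycle Times into a probabilistic single-item forecast.
cycle_times = sorted([1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 14, 16])  # days (hypothetical)

def percentile(data, pct):
    """Smallest observed value such that pct percent of observations are at or below it."""
    k = max(0, -(-len(data) * pct // 100) - 1)  # ceiling division, then 0-based index
    return data[int(k)]

for pct in (50, 70, 85, 95):
    print(f"{pct}% of items finished in {percentile(cycle_times, pct)} days or less")
```

Read the output the way you read a weather report: "there's an 85% chance this item is done within N days" communicates both a forecast and its uncertainty in one sentence.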
