- Velocity alone doesn’t measure success
Neither does Throughput.

Many scrum teams use a metric called velocity as their primary metric. Velocity is commonly defined as the number of story points you finish over time, usually measured by sprints. Story points are assigned by scrum teams, often using the Fibonacci sequence, to pieces of work based on their relative complexity. A simple piece of work might be assigned 1 story point while a really complex piece of work could be assigned 13 points or more. A team's velocity represents how much complexity reached their definition of done in a given sprint. Measured from sprint to sprint, it becomes a sort of gauge of success. If it stays the same or goes up, that's great. If it goes down repeatedly, there's a problem that needs to be investigated.

Teams using Throughput, another rate metric that counts the number of individual work items delivered instead of their relative complexity, do the same. They measure Throughput across time periods, often sprints, and use it to determine whether their success is increasing, staying stable, or waning.

No single metric defines success

Unfortunately, success is more complicated than that, and it can rarely, if ever, be measured with a single metric. Most teams and organizations don't just want to deliver things. They want to deliver good-quality things, and to deliver them quickly, on top of delivering them regularly. If you don't see a distinction, let me share the story of a team I started coaching in 2019.

This HR team came to me with a (perceived) Scrum process in place. They were using 2-week sprints for planning and retrospecting. But they were still completely overwhelmed and unhappy. Though it could have been higher, their throughput at least looked pretty consistent. To dig deeper, we looked at their Scrum board. The good news is that they understood their workflow: their board had columns representing their entire workflow from planning to delivery. The bad news? It was chock full of work items.
They were close to the end of their sprint, yet many items were still in the sprint backlog. Despite the obvious fact that many of the items planned for the current sprint would not be finished by the beginning of their next sprint, they had already filled up the planning column for their upcoming sprint.

Looking from multiple angles gives you a well-rounded picture

In the end, it was clear that they suffered from a "too much WIP" (work-in-progress) problem. They started so much work that everything took much longer to finish than it would have if they had just started less work at one time. This was something we noticed almost immediately when we looked at a metric called Cycle Time. This is a measure of how long it takes a piece of work to be completed (you can define your own start and end points for this metric). Using a chart called a Cycle Time Scatterplot, we could see that 85% of their work took up to 30 days to complete, while the remaining 15% took even longer! Now, remember their sprints were 14 days long. Houston, we have a problem!

This team had unfortunately settled into a pattern of delivering a consistent number of work items, but they were very old work items. They hardly ever finished work items within the same sprint they were started in. By adding the Cycle Time metric to our arsenal, we were able to get more insight into what was happening, find a problem, and start experimenting with solutions.

In Summary

The moral of the story is that one metric can be misleading at best and immensely harmful at worst. If you were to optimize for one metric without keeping an eye out for unintended consequences, you could end up causing bigger problems than you had when you started. When measuring team success, try to consider multiple aspects of success and establish measures to represent each. Then look at them together so that if you see one improve, you can make sure a negative impact didn't happen elsewhere.
For the curious, here's what came next for our HR team. We identified 3 major things that would help this team improve their Cycle Time metric, and maybe even their Throughput metric too:

1. Break work items down into smaller valuable pieces. Many of the items took so long because they simply had too many deliverables in them. Many process dysfunctions can be helped by breaking work down.
2. Remove board columns that aren't necessary. Every board column is a place to store more work-in-progress. There's always a balance to be struck, but it is ok to get rid of columns if they cause more pain than gain. This also meant we talked about not planning too early.
3. Implement WIP limits. Using Velocity or Throughput as a planning tool for a sprint provides a WIP limit for your sprint backlog, in that you limit the number of items you allow into a sprint. However, within a sprint, limiting the number of items you start at one time can often cause items to finish earlier and at a steadier pace throughout the sprint.
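The Cycle Time percentile reading described above (85% of work finishing within some number of days) can be computed from nothing more than start and finish dates. Here is a minimal sketch in Python; the dates are made up for illustration:

```python
import math
from datetime import date

# Hypothetical completed items: (start_date, finish_date) pairs
items = [
    (date(2019, 3, 1), date(2019, 3, 20)),
    (date(2019, 3, 4), date(2019, 4, 2)),
    (date(2019, 3, 10), date(2019, 3, 18)),
    (date(2019, 3, 11), date(2019, 4, 15)),
    (date(2019, 3, 15), date(2019, 3, 29)),
]

# Cycle Time = elapsed days from start to finish, counting the finish day
cycle_times = sorted((end - start).days + 1 for start, end in items)

def percentile(sorted_values, pct):
    """Smallest value at or below which pct percent of items finished."""
    index = math.ceil(pct / 100 * len(sorted_values)) - 1
    return sorted_values[index]

print(f"85% of items finished in {percentile(cycle_times, 85)} days or less")
```

This is the same reading you take off a Cycle Time Scatterplot's percentile line, just done numerically instead of visually.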
- Monte Carlo Simulations and Forecasting
When you hear Monte Carlo you probably have thoughts of the Formula One Grand Prix and extravagant casinos. At 55 Degrees, when we talk about Monte Carlo Simulations, we are talking about forecasting. 🤓

What is a Monte Carlo Simulation?

Here's a great definition from Investopedia: Monte Carlo simulations are used to model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. It is a technique used to understand the impact of risk and uncertainty in prediction and forecasting models. (Investopedia)

Monte Carlo simulations are widely used in many industries whenever uncertainty exists. Investment firms use them to project potential earnings across different investment options. Insurance companies use them to forecast risk in different populations or areas. Companies like yours can use them to forecast when you'll finish a group of work!

Simulations help make sense of uncertainty

Rarely can you give someone a forecast and be absolutely, positively certain that you'll be right. There are just too many uncontrollable factors that can get in the way. When we are uncertain about something, one way to learn more is to run experiments. Are you unsure how likely it is that a flipped coin will land on heads? To find out, you can flip it many times and use the results to calculate the likelihood. Do you need to know how long it will take to complete a specific work item? You can use your past cycle times to calculate how long it's likely to take. Do you need to know when you'll finish a group of work instead of a single item? That's a bit more complex! Instead of manually doing groups of work repeatedly in real time, we use Monte Carlo simulations to simulate doing the work thousands of times. Fortunately, it only takes seconds!
Two questions you can answer

When you're forecasting with Monte Carlo simulations you're likely trying to answer one of these two questions:

- When will this specific number of work items be completed? (Fixed scope)
- How many items can we complete by this specific date? (Fixed date)

Projects aren't the only situations that require us to answer these questions. You may need to forecast when you'll get to an item that's near the top of your backlog, or you may need to help decide how many items to plan for in your upcoming Sprint. I'm sure you can think of more examples if you stop and think!

Running a Monte Carlo simulation

To run a Monte Carlo simulation you'll need to provide a few things:

- A start date - Just as your GPS can't tell you when you'll arrive without knowing when you'll be leaving, a Monte Carlo simulation can't tell you when you're likely to be finished if you don't provide a start date.
- Throughput data - The simulation samples real Throughput data to project how much you might finish on each day of every trial run by the simulation. There are thousands and thousands of these trial runs in each simulation. The outcome of each of those trial runs is recorded and allows you to calculate the odds of what might happen in the future.
- The fixed aspect - This is the desired end date when you have a fixed date, or the number of items when you have a fixed scope.

Using the results to create a forecast

The tool you use for running the simulations controls how the results are presented. No matter how it is presented, it should provide you with the tools to create a probabilistic forecast - something like "There's an 85% chance that we'll finish on or before August 4th." or "There's a 90% chance that we can do 15 or more items by ." Like those above, every probabilistic forecast should have two pieces of information (shown in bold above):

- a probability
- a range of outcomes

The results can be shown in various ways depending on the tool you use.
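The trial-run mechanics described above fit in a few lines of code. The sketch below is a simplified fixed-scope simulation, assuming purely random sampling of daily Throughput; the history values and item count are invented for illustration:

```python
import math
import random
from datetime import date, timedelta

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical Throughput history: items finished on each past day
history = [0, 2, 1, 0, 3, 0, 1, 2, 0, 1, 0, 2, 1, 0, 0, 4, 1, 0, 2, 1]

def simulate_finish_days(remaining_items, history, trials=10_000):
    """For each trial, resample past daily throughput until the scope is done."""
    outcomes = []
    for _ in range(trials):
        done, days = 0, 0
        while done < remaining_items:
            done += random.choice(history)  # one randomly sampled past day
            days += 1
        outcomes.append(days)
    return sorted(outcomes)

start = date(2024, 6, 1)                   # the required start date
outcomes = simulate_finish_days(remaining_items=30, history=history)

# 85% of trials finished in this many days or fewer
p85_days = outcomes[math.ceil(0.85 * len(outcomes)) - 1]
print(f"85% chance of finishing on or before {start + timedelta(days=p85_days)}")
```

Each trial is one simulated future; sorting the 10,000 outcomes lets you read off any probability line you like.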
Here are two that we use in ActionableAgile:

Histogram view - this is what you might think of as the raw data view. It shows the different outcomes that happened and the number of times each one happened. With this chart, it is quite simple to calculate probabilities. If your simulation ran 10,000 trials, to find the 50% line simply find the place where 5,000 trials have taken place and draw a line there. For 85%, keep going until you find 8,500 trials and do the same.

Calendar view - this is a user-friendly view of the information in the histogram. This view makes it simple to talk about other dates and what it might take to change the odds of hitting them.

Considerations to keep in mind

Monte Carlo simulations only factor in conditions that were present when you generated your historical data (team size, skill set, work policies, etc.). If those conditions significantly change, you'll need to generate new data under the new conditions in order to have reliable forecasts using Monte Carlo simulations (as you would with any forecasting method that uses historical data).

Monte Carlo simulations often use random sampling of your historical data during the trial runs. When this is the case, as it is with our products, every data point is as likely to be randomly selected as any other. If you deliver 0 items most days, then most days in the simulation are likely to also have 0 delivered. In other words, you can deliver 10 items in the span of a week, but it matters how predictably those deliveries are distributed. You'll get better results from a consistent delivery of a couple of items each day than from delivering nothing for 4 days and 10 items on the fifth. To improve your predictability, and the resulting forecasts, focus on reducing Work Item Age.

Want a tool to help you get started with Monte Carlo Simulations? You've definitely come to the right place.
55 Degrees offers two different products that use Monte Carlo Simulations to provide forecasts: Portfolio Forecaster for Jira and ActionableAgile (available as a SaaS app or embedded in Azure DevOps and Jira). In ActionableAgile you can use Monte Carlo Simulations to provide forecasts for fixed scope and fixed date efforts. Portfolio Forecaster uses Monte Carlo simulations to forecast your Jira Epics or Versions, taking into account how many are in progress at one time and your historical Throughput. Start now by trying either product for 30 days at no cost. We're always here to support you if you have any questions. Happy forecasting!
- What is a Cycle Time Scatterplot?
The Cycle Time Scatterplot chart is arguably the best way to view your Cycle Time data - the total elapsed time it took for individual items to move from one point of your workflow to another - usually from start to finish. Why is it the best? Because Cycle Time is all about time, and the Scatterplot lets us see Cycle Time data in the context of time. The position on the horizontal axis tells us when the item was finished and the position on the vertical axis tells us how long the item(s) took.

What can I learn from this chart?

- First and foremost, you can find the cycle time of an individual piece of work.
- You can see at a glance if your cycle times are getting more or less predictable by looking to see if the range of Cycle Times is increasing or decreasing.
- You can see how long it took to complete work items in the past and use that to realistically forecast expectations for how long a work item may take to complete in the future. You can use this information to set a Service Level Expectation (SLE) for your team. This can be useful as an internal team metric to use in the context of current work item age to help maintain or improve predictability for the future.

Additionally, you can learn about your process and the work that goes through it by exploring the clustering patterns of dots as well as the empty space on the chart. Asking questions about why the chart looks the way it does helps you learn about the impacts of certain decisions and events so you can make better decisions in the future.

The Cycle Time Scatterplot in ActionableAgile

The Cycle Time Scatterplot is the chart that you land on when you load ActionableAgile because it is the one that most people begin with on their journey to better flow. Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
- What is probabilistic forecasting?
A probabilistic forecast is one that acknowledges a wide array of possible outcomes and assigns a probability, or likelihood of happening, to each. This makes it the perfect method for forecasting in uncertain situations like at work!

What makes a forecast probabilistic?

Every probabilistic forecast should have 2 components: a range and a probability. In the image above you see that there's a 15% chance that it will rain sometime between 12:00 and 13:00. It's not saying that it will rain the entire time. Just that there's a 15% chance that sometime in that hour you will experience some rain. This also means that there's an 85% chance that you won't.

What data is needed for probabilistic forecasting?

When it comes to probabilistic forecasting at work, you're usually trying to answer one of these questions:

- When will this piece of work be done? aka How long will it take? (Single work item)
- When will this collection of work be done? (Multiple work items - fixed scope)
- How much work can we complete by a specific date? (Multiple work items - fixed date)

To answer these questions you need to know some basic information about your past work: when each item started and when it finished. With this minimal data you can learn a lot about your system and what it can produce. Keep in mind this underlying rule of thumb: the conditions you had when you generated that data need to be roughly similar to the conditions you expect for the period you're forecasting. When that holds, you can use your data to forecast what is likely to happen in the future.

How do I forecast an individual work item?

To know how long it is likely to take for an individual piece of work to be completed, you want to look at how long it has taken you to complete work in the past. This data is called your Cycle Time. You can look at this data on a Cycle Time Scatterplot to quickly see what percentage of items finished in a certain range of time.
The percentage you choose becomes your probability (component 1) and the range (component 2) is all the possible cycle times up to and including the line. You can see from the data above that 85% of work items finished in 16 days or less. You can turn that into the following probabilistic forecast: There's an 85% chance that you'll finish a work item in 16 days or less. By the way, this means from when it starts!

What about larger efforts?

If you need to provide a forecast that includes more than one item, you can't just add individual forecasts together. You need to understand the rate at which you finished work in the past. Fortunately, that's exactly what the flow metric called Throughput tells you. However, it is not as easy as looking at your Throughput data on a chart as you can with Cycle Time. If you use the Run Chart for this you can only look at what's likely to happen for one time unit. For most of your forecasts you'll need more than one of those. 😃 So, in these situations you can use a tool called a Monte Carlo simulation. (Learn more about Monte Carlo simulations)

How do I forecast for a fixed scope?

Sometimes you're trying to find out when a specific amount of work can be completed. That's what we call a fixed scope forecast. Entering a start date and the number of items you have in scope into a Monte Carlo simulation can help you see how likely you are to finish that scope of work on any given day, and it can tell you the probabilities of a range of outcomes using your data. For example, from the data in the image above you can say "There's an 85% chance that we'll finish this scope of work on or before May 11th". If that's not ideal you can look to other dates to see how likely they are and then have conversations about what you'll need to do to make that more likely or, perhaps, have discussions about changing expectations to be more realistic.

How do I forecast for a fixed date?
If you're working towards a fixed date rather than a fixed scope, the process is almost exactly the same but with one tiny twist. Instead of providing the number of items you have in scope, you provide your fixed date as a finish date. Now the simulation can tell you how many items you are likely to finish by that date with any given probability. With the data from the image above, I can provide a probabilistic forecast: There's an 85% chance that we can finish 19 or more items by July 6th.

Can I forecast my portfolio probabilistically?

Probabilistic forecasting can be applied at a portfolio level using the same concepts but with some different tooling. At 55 Degrees, we forecast our portfolios using Throughput data and Monte Carlo simulations as explained above, but with an added consideration: how many efforts we have happening concurrently, alongside our Throughput and our chosen probability. Our simulation provides information such as how likely we are to finish a given effort by a fixed date and when we're likely to finish based on what remains in the effort and our recent throughput. Learn more about our Portfolio Forecaster here.

Forecasting is not a one-time affair

Meteorologists don't just give you a forecast for an upcoming storm when they first hear about it and then leave you without updates, right? You don't check your GPS before you leave to find out how long it will take and then shut it off, do you? No, of course not. That would be silly. You'll absolutely want to re-run your forecasts regularly to see how they are affected by current conditions. This ensures that you find out about any shifts as early as possible and minimize late surprises. (Help us make #continuousforecasting trend!)

What are the benefits of probabilistic forecasting?

Simply put, probabilistic forecasts are less expensive than traditional methods requiring expert estimation and work breakdown because they take very little time.
Because they are cheaper, it is easier for you to provide those needed updates to your forecasts! Probabilistic forecasts are also more accurate because your data already accounts for factors that we struggle to incorporate into our estimates. Read about the German Tank Problem for an example of this. Cheaper. More accurate. No-brainer! In fact, as long as we are winning on even just one of those (and not sacrificing too much of the other) then it is worth a switch. But you don't have to take anyone's word for it. It's easy to start doing probabilistic forecasting alongside whatever methods you traditionally use. That way you can see for yourself what works better for your context.

Want to get started?

You can do all of these forecasts by hand (or with Excel). However, it might get a bit tedious. So, of course, 55 Degrees has an app for that, and you can try them out for free for at least 30 days (or more depending on your platform). Check out ActionableAgile and Portfolio Forecaster today and reach out to us if you have any questions!

Frequently asked questions

Does all my work need to be the same size?

Many people think that probabilistic forecasting won't work for them if their work is varied in type or size. Your work doesn't have to be the same size at all for this to work. Obviously, variation will cause the spread of possible outcomes to be wider. If you're not happy with the spread in the range, you can work to improve your process predictability. This will require you to consider many things about your process, one of which may be right-sizing work. However, if the data that goes into the Monte Carlo simulation reflects the variety of your work, the generated forecasts will reflect that variety as well. In fact, your data does a better job of reflecting all of the variety across the various conditions that impact your work than you ever could on your own!

My forecast was wrong! What gives?
The words “right” and “wrong” when it comes to forecasts should be re-evaluated. Using the language of probabilities reminds us that something unexpected can happen and disrupt our desired timelines. In truth, the only things we can be certain about are the things that have already been delivered. Outside of that, there is no 100% in a probabilistic forecast. If we said there was an 85% chance we would deliver work in 13 weeks or less and it took us 15, it doesn’t mean that we were wrong. We stated upfront there was a 15% chance that work would take longer. OK, am I really ready for probabilistic forecasting? Yes! Any team, even new teams, can use this type of forecasting. This feels like an advanced concept but it really isn’t. It’s just very different than what we’re used to. Don’t have historical data? Use estimates until you finish some work and then switch to using historical data.
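The fixed-date forecast described above ("N or more items by a given date") can also be sketched with a small simulation. This is a simplified illustration, assuming random sampling of invented daily Throughput history; real tools add refinements on top of this core loop:

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical daily Throughput history: items finished per day
history = [0, 2, 1, 0, 3, 0, 1, 2, 0, 1, 2, 1]

def simulate_items_by_date(days_available, history, trials=10_000):
    """For each trial, sum randomly sampled daily throughputs over the window."""
    return sorted(
        sum(random.choice(history) for _ in range(days_available))
        for _ in range(trials)
    )

totals = simulate_items_by_date(days_available=20, history=history)

# For an "N or more items" forecast at 85% confidence, read the value that
# 85% of trials met or exceeded, i.e. the 15th percentile of the totals.
n_or_more = totals[math.floor(0.15 * len(totals))]
print(f"85% chance of completing {n_or_more} or more items in 20 days")
```

Note the flipped percentile: for fixed scope you read the high end of the distribution (finish by day X or later is bad), while for fixed date you read the low end (finishing N or more items is good).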
- Analyzing Throughput in ActionableAgile
Throughput is a flow metric that tells us the rate at which work items are finished in a given process. ActionableAgile has multiple charts that can give you information about your Throughput:

- Throughput Histogram
- Throughput Run Chart
- Cycle Time Scatterplot
- Cumulative Flow Diagram

The first two are specifically made to relay information about the Throughput of your process. The last two happen to tell us about Throughput as a byproduct! Want to learn more about Throughput in general? Check out our "What is Throughput?" blog post. To learn more about these four charts in ActionableAgile, keep reading.

Histogram

The Throughput Histogram is a bar chart that displays how often you experience certain daily Throughput – in other words, the frequency of Throughput values. You can use the histogram to see what throughputs are most likely for ONE given instance of your time unit - one day or one week, etc. This is often not sufficient for forecasting across multiple instances of your time unit (multiple days, weeks, etc.). Read more about our Throughput Histogram in our product documentation.

Run Chart

The Throughput Run Chart is a line chart that shows you the variation in your Throughput data over time. This is, hands down, the best chart to use for straight Throughput analysis because of the time axis. We believe that all time-based metrics are best analyzed on a time-based chart. Time-based charts allow you to see patterns in your data over time and ask questions to learn more about how your team worked and why. You cannot discern this pattern-based information in a histogram. Read more about our Throughput Run Chart in our product documentation.

Cycle Time Scatterplot

The purpose of the Cycle Time Scatterplot is to tell us all about a different flow metric called Cycle Time. However, as the Cycle Time Scatterplot has data points representing all finished work across a time axis, we can look at those points and indirectly calculate Throughput values.
In the Scatterplot, you'll toggle on the Summary Statistics box via the Chart Controls. In the example above, you can see that 305 work items were completed in 106 days. As you use other chart controls, including the date or item filters, the summary statistics will update so that at any given time you see the total throughput for a set number of days. You do not see how the throughput values change over time as you do in the Run Chart. Read more about our Cycle Time Scatterplot in our product documentation.

Cumulative Flow Diagram

The Cumulative Flow Diagram is a stacked area chart that is built by adding information from a daily snapshot of your process each day. One of the things you can see in the Cumulative Flow Diagram is how many items left one part of the process and entered the next. Because Throughput is defined as the number of items that finish in a given unit of time, you can get Throughput information by looking at how the area band that denotes your "finished" state changes over time. However, the CFD doesn't provide this information for you at a glance. That's what the Throughput Run Chart is for. The other related information you can get from the CFD is the average throughput, also known as the average departure rate. You see this by turning on the rate lines. Read more about our Cumulative Flow Diagram in our product documentation.

In summary...

There are many ways to learn about the Throughput of your process in ActionableAgile. So, here are our suggestions:

- Use the Throughput Run Chart to see how your Throughput changes over time.
- Use the Cumulative Flow Diagram to see how Throughput interacts with other flow metrics.
- Finally, use Monte Carlo simulations that work with your Throughput data to forecast efforts containing multiple work items.

Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
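Both the Run Chart and the Histogram views described in this post are derived from the same raw input: completion dates. A minimal sketch of that derivation, using invented dates:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical completion dates of finished work items
finished = [
    date(2024, 5, 1), date(2024, 5, 1), date(2024, 5, 3),
    date(2024, 5, 4), date(2024, 5, 4), date(2024, 5, 4),
    date(2024, 5, 6),
]

start, end = date(2024, 5, 1), date(2024, 5, 6)
per_day = Counter(finished)

# Run-chart data: daily Throughput for every day in the window, zeros included
run_chart = [per_day.get(start + timedelta(days=d), 0)
             for d in range((end - start).days + 1)]

# Histogram data: how often each daily Throughput value occurred
histogram = Counter(run_chart)

print(run_chart)  # [2, 0, 1, 3, 0, 1]
print(dict(histogram))
```

Note that zero-throughput days must be kept in the series; dropping them would skew both the run chart and any Monte Carlo simulation that samples from it.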
- Analyzing WIP in ActionableAgile
WIP (or Work In Progress) is a flow metric that tells us how many work items are in progress at any given time in a process - that is, items that have started but not yet finished. Once you know how to measure WIP, you will want to start analyzing the data. There are three charts in ActionableAgile that provide insights into current and past WIP levels:

- WIP Run Chart
- Aging Work in Progress Chart
- Cumulative Flow Diagram

WIP Run Chart

The WIP Run Chart is a line chart that shows the number of items in progress per day across time. With this ability to clearly see how WIP levels change over time, you can get early signals of changes in Cycle Time and Throughput - for better or worse! This allows you to have better conversations about the impact of WIP on your process. Learn more about the WIP Run Chart in our product documentation.

Aging Work in Progress Chart

Another chart where WIP can be seen is the Aging Work in Progress chart. The primary purpose of this chart is to analyze another flow metric, Work Item Age, but you can also calculate WIP for the day being viewed. While you can click on a dot in the WIP Run Chart to see which items were in progress on a given day, this chart allows you to see the WIP from any given day in greater detail. From here you can see what workflow status each work item is in as well as the age of each work item. On this chart you can use the Aging Replay control to see this information about WIP for any day reflected in your data. Learn more about the Aging Work In Progress Chart in our product documentation.

Cumulative Flow Diagram

The final chart that provides insight into WIP within ActionableAgile is the Cumulative Flow Diagram. This chart provides a visualization of the interplay between WIP, Cycle Time, and Throughput. The height of the color bands in the CFD shows you an actual count of items in each workflow stage on any given day.
You can use the chart's WIP Tooltips control to show WIP by stage, or collectively as a system, as your cursor moves through the timeline. By looking at the thickness of the color band(s) over time, you can see how WIP changes and the corresponding change in Approximate Average Cycle Time and Average Throughput. You may even be able to determine good WIP limits by looking at how much WIP you had when Throughput and Cycle Time were ideal. Learn more about the Cumulative Flow Diagram in our product documentation.

In Summary...

There are many ways to learn about the WIP in your process with ActionableAgile. So, here are our suggestions:

- Use the WIP Run Chart to see how your WIP changes over time.
- Use the Aging Work in Progress Chart to learn more about the WIP from any given day.
- Use the Cumulative Flow Diagram to see how WIP interacts with other flow metrics and decide on any adjustments you might need to make in your WIP levels.

Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
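The daily WIP counts behind a WIP Run Chart follow directly from the definition above: an item counts toward WIP on every day between its start and its finish. A minimal sketch with invented items:

```python
from datetime import date, timedelta

# Hypothetical items: (start_date, finish_date); None = still in progress
items = [
    (date(2024, 5, 1), date(2024, 5, 4)),
    (date(2024, 5, 2), None),
    (date(2024, 5, 3), date(2024, 5, 3)),
    (date(2024, 5, 5), None),
]

def wip_on(day, items):
    """Items started on or before `day` and not finished before `day` ends."""
    return sum(1 for start, end in items
               if start <= day and (end is None or end >= day))

window = [date(2024, 5, 1) + timedelta(days=d) for d in range(6)]
print([wip_on(d, items) for d in window])  # [1, 2, 3, 2, 2, 2]
```

One policy decision is baked into `wip_on`: an item still counts as WIP on the day it finishes. Whatever convention you pick, apply it consistently so your WIP, Throughput, and Cycle Time numbers line up.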
- Managing Work Item Age in ActionableAgile
Work Item Age is the elapsed time since a work item started. It is one of four key flow metrics alongside Cycle Time, Throughput, and WIP. Of the four flow metrics it is, arguably, the most important because controlling age is a key way to improve process predictability. ActionableAgile provides a feature-rich Aging Work in Progress chart to help you measure and control Work Item Age.

The Aging Work In Progress Chart

The Aging Work in Progress chart is a lot like a visual board: the columns reflect your workflow stages and the items show as dots in the appropriate column. The vertical placement of a dot reflects the item's Work Item Age. A dot may represent more than one work item if they are in the same workflow stage and have the same Work Item Age.

How to use the chart to manage Work Item Age

Only while an item appears on this chart can you exert any control over where it will end up in your Cycle Time data. If you look at the last column of this chart you will notice that there are no work items represented. When an item reaches this workflow stage, it is complete and appears as historical data on your Cycle Time Scatterplot instead. Nothing you do now can change how long it took to complete that item. Because you use Cycle Time data to answer "How long will it take?" for a single work item, Work Item Age should be a key consideration when making your plan for the day. But knowing the age of a work item isn't enough information on its own. In order to know if the age of a work item is bad, good, or indifferent you need context. ActionableAgile overlays percentile lines from the Cycle Time data to add this context right where you need it. In the image above you can see that 85% of past items have finished in 16 days or less. Now, you can keep that in mind as you track work items and make daily plans. If you want to maintain that level of predictability, you'll need to continue to finish 85% of work items in 16 days or less.
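The daily check described above (compare each in-progress item's age against a Cycle Time percentile) is easy to script. A minimal sketch, with hypothetical item names and dates, and the 85%/16-day Service Level Expectation from the example:

```python
from datetime import date

today = date(2024, 6, 10)
sle_days = 16  # e.g. the 85th-percentile Cycle Time from past data

# Hypothetical in-progress items: name -> start date
in_progress = {
    "ITEM-101": date(2024, 5, 20),
    "ITEM-104": date(2024, 6, 3),
    "ITEM-107": date(2024, 6, 9),
}

# Work Item Age = elapsed days since start, counting today
ages = {name: (today - started).days + 1 for name, started in in_progress.items()}

for name, age in ages.items():
    flag = " <-- exceeds the SLE!" if age > sle_days else ""
    print(f"{name}: {age} days old (SLE: 85% within {sle_days} days){flag}")
```

Running something like this in daily planning surfaces the aging items while you can still do something about them.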
Getting early signals of slow work It's easy to know if an item near the end of the workflow is in danger of finishing beyond the desired age. Knowing that about items early in the workflow is more difficult. ActionableAgile’s pace percentiles help provide early signals that work is moving at a slower pace than past work. Learn more about the Aging Work in Progress chart and the various chart controls in our product documentation. In Summary... If you can only measure and manage one thing, make it Work Item Age. At its core, Work Item Age is a process improvement metric. When you see items aging more than expected, you can experiment with tactics to see if they help. There is no single fix but common tactics include limiting WIP, controlling work item size, reducing dependencies, and more. Once you manage Work Item Age, your Cycle Time data should stabilize and make forecasting easier! Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
- When an Equation Isn't Equal
This is post 1 of 9 in our Little's Law series. Try an experiment for me. Assuming you are tracking flow metrics for your process -- which, if you are reading this blog, you probably are -- calculate your average Cycle Time, your average Work in Progress (WIP), and your average Throughput for the past 60-ish days. [Note: what data to collect and how to turn that data into the four basic metrics of flow is covered in a previous blog post]. The exact number of days doesn't really matter as long as it is arbitrarily long enough for your context. That is, if you have the data, you could even try this experiment for longer or shorter periods of time. Now take your historical average WIP and divide it by your historical average Throughput. When you do that, do you get your historical average Cycle Time exactly? Another quick disclaimer: for the purposes of this experiment, it is best if you don't pick a time period that starts with zero WIP and ends with zero WIP. For example, if you are one of the very few lucky Scrum teams that starts all of your Sprints with no PBIs already in progress, and all PBIs that you start within a Sprint finish by the end of the Sprint, then please don't choose the first day of the Sprint and the last day of the Sprint as the start and endpoint for your calculation. That's technically cheating, and we'll explain why in a later post. You've probably realized by now that we are testing the equation commonly referred to as Little's Law (LL): CT = WIP / TH where CT is the average Cycle Time of your process over a given time period, WIP is the average Work In Progress of your process for the same time period, and TH is the average Throughput of your process for the same time period. It may seem obvious, but LL is an equation that relates three basic metrics of flow. Yes, you read that right. LL is an equation. As in equal. Not approximate. Equal. In your above experiment, was your calculation equal? My guess is not.
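If you want to run the experiment yourself from raw start/finish dates, here is a minimal sketch of how the three averages are typically computed. All item names and dates are invented; a finish date of None means the item is still in progress (which, as the examples below show, is one reason the two sides of the equation diverge).

```python
from datetime import date, timedelta

# Made-up start/finish dates per work item (None = still in progress).
items = {
    "A": (date(2023, 3, 1), date(2023, 3, 9)),
    "B": (date(2023, 3, 2), date(2023, 3, 20)),
    "C": (date(2023, 3, 5), date(2023, 3, 15)),
    "D": (date(2023, 3, 10), date(2023, 4, 2)),
    "E": (date(2023, 3, 20), None),
    "F": (date(2023, 4, 1), date(2023, 4, 20)),
}

window_start, window_end = date(2023, 3, 1), date(2023, 4, 29)  # 60 days
days = [window_start + timedelta(d)
        for d in range((window_end - window_start).days + 1)]

# Average WIP: mean count of items in progress per day across the window.
avg_wip = sum(
    sum(1 for s, f in items.values() if s <= day and (f is None or f >= day))
    for day in days
) / len(days)

# Average Throughput: items finished in the window, per day.
finished = [(s, f) for s, f in items.values()
            if f and window_start <= f <= window_end]
avg_th = len(finished) / len(days)

# Average Cycle Time (inclusive day count) of items finished in the window.
avg_ct = sum((f - s).days + 1 for s, f in finished) / len(finished)

print(f"avg WIP {avg_wip:.2f}, avg TH {avg_th:.2f}, avg CT {avg_ct:.2f}")
print(f"WIP / TH = {avg_wip / avg_th:.2f} vs measured CT {avg_ct:.2f}")
```

With this tiny dataset, WIP / TH and the measured average Cycle Time come out noticeably different -- item "E" keeps accruing WIP without ever contributing to Throughput or Cycle Time, which is the same mismatch the post explores.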
Here's an example of metrics from a team that I worked with recently (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3 In this example, WIP / TH is 16.99, not 10.3. For a different 60-day period, the numbers are: WIP: 13.18, TH: 1.03, CT: 9.1 This time, WIP / TH is 12.80, not 9.1. And one last example: WIP: 27.10, TH: 3.55, CT: 8.83. WIP / TH is 7.63, not 8.83. Better, but still not equal. If you are currently using the ActionableAgile tool, then doing these calculations is relatively easy. Simply load your data, bring up the Cumulative Flow Diagram (not that I normally recommend you use the CFD), and select "Summary Statistics" from the right sidebar. Here is a screenshot from an arbitrary date range I chose using AA's preloaded example data: From the above image, you'll see that: WIP: 26.40, TH: 3.04, CT: 9.48 However, 26.40 / 3.04 is 8.68, not 9.48. As evidence that I didn't purposefully select a date range that proved my point, here's another screenshot: Where 28.11 / 3.51 equals 8.01, not 8.86. In fact, I'd be willing to bet that in this example data -- which is from a real team, by the way -- it would be difficult to find an arbitrarily long time period where Average Cycle Time actually equals Average WIP divided by Average Throughput. Just look at the summary stats for the whole date range of pre-loaded data to see what I'm talking about: 21.21 / 2.31 equals 9.18, not 9.37 -- still close, but no cigar. I'd be willing to bet that you had (or will have) similar results with your own data. If you tried even shorter historical time periods, then the results might even be more dramatic. So what's going on here? How can something that professes to be an equation be anything but equal? We'll explore the exact reason why LL doesn't "work" with your data in an upcoming blog post, but for now, we'll actually need to take a step back and explore how we got into this mess, to begin with.
After all, it is very difficult to know where we are going if we don't even know where we came from... Explore all entries in this series When an Equation Isn't Equal (this article) A (Very) Brief History of Little's Law The Two Faces of Little's Law One Law. Two Equations It's Always the Assumptions The Most Important Metric of Little's Law Isn't In the Equation How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
- Probabilistic vs. deterministic forecasting
Do you hear people throwing around words like probabilistic and deterministic forecasting, and you aren't sure exactly what they mean? Well, I'm writing this blog post specifically for you. Spoiler alert: it has to do with uncertainty vs. certainty. Forecasting is the process of making predictions based on past and present data (Wikipedia). Historically, the type of forecasting used for business planning was deterministic (or point) forecasting. Increasingly, however, companies are embracing probabilistic forecasting as a way to help understand risk. What is deterministic forecasting? Just like Fight Club, people don't really talk about deterministic forecasting. It is just what they do, and they don't question it - at least until recently. I mean, if it is all someone knows, why would they even think to question it or explore the pros and cons? But what is it really? Deterministic forecasting is when only one possible outcome is given without any context around the likelihood of that outcome occurring. Statements like these are deterministic forecasts: It will rain at 1 P.M. Seventy people will cross this intersection today. My team will finish ten work items this week. This project will be done on June 3rd. For each of those statements, we know that something else could happen. But we have picked a specific possible outcome to communicate. Now, when someone hears or reads these statements, they do what comes naturally to humans... they fill in the gaps of information with what they want to be true. Usually, what they see or hear is that these statements are absolutely certain to happen. It makes sense. We've given them no alternative information. So, the problem with giving a deterministic forecast when more than one possible outcome really exists is that we aren't giving anyone, including ourselves, any information about the risk associated with the forecast we provided. How likely is it truly to happen?
Deterministic forecasts communicate a single outcome with no information about risk. If there are factors that could come into play that could change the outcome, say external risks or sick employees, then deterministic forecasting doesn't work for us. It doesn't allow us to give that information to others. Fortunately, there's an alternative - probabilistic forecasting. What is probabilistic forecasting? A probabilistic forecast is one that acknowledges the range of possible outcomes and assigns a probability, or likelihood of happening, to each. The image above is a histogram showing the range of possible outcomes from a Monte Carlo simulation I ran. The question I effectively asked was "How many items can we complete in 13 days?" Now, there are a lot of possible answers to that question. In fact, each bar on the histogram represents a different option - anywhere from 1 to 75 or more. We can, and probably should, work to make that range tighter. But, in the meantime, we can create a forecast by understanding the risk we are willing to take on. In the image above we see that in approximately 85% of the 10,000 trials we finished at least 19 items in 13 days. This means we can say that, if our conditions stay roughly similar, there's an 85% chance that we can finish at least 19 items in 13 days. That means that there's a 15% chance we could finish 18 or fewer. Now I can discuss that with my team and my stakeholders and make decisions to move forward or to see what we can do to improve the likelihood of the answer we'd rather have. Here are some more probabilistic forecasts: There is a 70% chance of rain between now and 1 P.M. There's an 85% chance that at least seventy people will cross this intersection today. There's a 90% chance that my team will finish ten or more work items this week. There's only a 50% chance that this project will be done on or before June 3rd.
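A Monte Carlo simulation like the one behind that histogram can be sketched in a few lines. This is a simplified version of the general technique, not the exact algorithm in any particular tool; the daily throughput samples are invented for illustration.

```python
import random

# Sample historical daily throughput (items finished per day) to simulate
# how many items might finish in the next 13 days. Samples are made up.
daily_throughput_samples = [0, 0, 1, 1, 1, 2, 2, 3, 0, 1, 2, 4, 0, 1, 3]

random.seed(7)  # fixed seed so the sketch is reproducible
TRIALS, HORIZON_DAYS = 10_000, 13

# Each trial: draw a random historical day's throughput for each future day.
totals = sorted(
    sum(random.choice(daily_throughput_samples) for _ in range(HORIZON_DAYS))
    for _ in range(TRIALS)
)

# "In 85% of trials we finished at least N items": take the value that 85%
# of simulated outcomes are at or above (the 15th percentile from below).
at_least_85 = totals[int(0.15 * TRIALS)]
print(f"85% of trials finished at least {at_least_85} items in {HORIZON_DAYS} days")
```

Reading different percentiles off the sorted outcomes is how you match the forecast to your risk tolerance: a 50% answer is bolder, a 95% answer is more conservative.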
Every probabilistic forecast has two components: a range and a probability, allowing you to make informed decisions. Learn more about probabilistic forecasts Which should I use? To answer this question you have to answer another: Can you be sure that there's a single possible outcome, or are there factors that could cause other possibilities? In other words, do you have certainty or uncertainty? If the answer is certainty, then deterministic forecasts are right for you. However, that is rarely, if ever, the case. It is easy to give in to the allure of the single answer provided by a deterministic forecast. It feels confident. Safe. Easy. Unfortunately, those feelings are an illusion. Deterministic forecasts are often created using qualitative information and estimates but, historically, humans are really bad at estimating. Our brains just can't account for all the possible factors. Even if you were to use data to create a deterministic forecast, you still have to pick an outcome to use, and often people choose the average. Are you OK with being wrong half the time? It is better to be vaguely right than exactly wrong. Carveth Read (1920) If the answer is uncertainty (like the rest of us) then probabilistic forecasts are the smart choice. By providing the range of outcomes and the probability of each (or a set) happening, you give significantly more information about the risk involved with any forecast, allowing people to make more informed decisions. Yes, it's not the tidy single answer that people want, but it's the truth. Remember that the point of forecasting is to manage risk. So, use the technique that provides as much information about risk as possible. How can I get started? First, gather data about when work items start and finish. If you're using work management tools like Jira or Azure DevOps then you are already capturing that data.
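With start and finish dates captured, the reverse question -- "when will this fixed scope be done?" -- can be simulated the same way as the fixed-date question. This is a rough sketch under invented assumptions: the throughput samples and remaining-item count are made up, and real tools do considerably more.

```python
import random

# Historical daily throughput samples (invented for illustration).
daily_throughput_samples = [0, 0, 1, 1, 1, 2, 2, 3, 0, 1, 2, 4]
REMAINING_ITEMS, TRIALS = 30, 10_000
random.seed(11)  # fixed seed so the sketch is reproducible

def days_to_finish(scope):
    """Simulate one trial: count days until `scope` items are finished."""
    done, days = 0, 0
    while done < scope:
        done += random.choice(daily_throughput_samples)
        days += 1
    return days

durations = sorted(days_to_finish(REMAINING_ITEMS) for _ in range(TRIALS))

# 85th percentile: in 85% of trials, the scope finished within this many days.
print(f"85% chance of finishing {REMAINING_ITEMS} items within "
      f"{durations[int(0.85 * TRIALS)]} days")
```

The output is a range-plus-probability statement of exactly the kind described above, rather than a single date.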
With that information you can use charts and simulations to forecast how long it takes to finish a single work item, how many work items you can finish in a fixed time period, or even how long it will take to finish a fixed scope of work. These are things we get asked to do all the time. You don't even need a lot of data. If you have at least 10 work items, preferably a representative mix, then you have enough data to create probabilistic forecasts. Once you have the data you need, tools like ActionableAgile™️ and Portfolio Forecaster from 55 Degrees help you determine the forecast that matches your risk tolerance with ease. You can also use our tools to improve the predictability of your process. When you do that, you'll be happier with your forecasts because you get a higher probability with a narrower range of outcomes. If you're interested in chatting with us or other users on this topic, join us in our community and create a post! See you there!
- Is your workflow hiding key signals?
There are lots of signals that you can get from visualizing your work - especially on a Kanban board. You can see bottlenecks, blockers, and excess work-in-progress, but one signal you don't often get to see is the answer to the question, "How much longer from here?" Now, to get that signal, you have to have a process that models flow. By flow, I mean the movement of potential value through a system. Your workflow is intended to be a model of that system. When built in that way, your workflow allows you to visualize and manage how your potential value moves through your system. Managing flow is managing liability and risk A tip is to look at your workflow from a financial perspective. Work items you haven't started are options that, when exercised, could deliver value. Work items you have finished are (hopefully) assets delivering value. The remainder - all the work items that you've spent time and money on but haven't received any value in return yet (work-in-progress) - are your liabilities. What this helps us clearly demonstrate is that our work-in-progress is where most of our risk lies. Yes, we could have delivered things that don't add value (and hopefully, there are feedback loops to help identify those situations and learn from them). You can also have options that you really should be working on to maximize the long-term value they can provide. But, by far, the biggest risk we face is taking on too much liability and not managing that liability effectively - causing us to spend more time and money than we should to turn them into assets. Expectations versus reality We humans have a tendency to look at things with rose-colored glasses (OK, most of us do). So, when we start a piece of work, we think it will have a nice, straight, and effective trip through the workflow with no u-turns or roadblocks. More often than not, that's not the case, and there are many reasons for that. One of the biggest reasons is how we build our workflow.
When you build your workflow to model the linear progression of work as it moves from an option to an asset, you're more likely to have that straight path. If you build your workflow to model anything else - especially the different groups of people that will work on it - then you end up with an erratic path. You can get a picture of how work moves between people (if you use tools like Inspekt). But what you don't get is a picture of how work moves through a lifecycle from option to asset. This is a problem if you think you're using your workflow to help optimize flow because you aren't seeing the signals you think you are. In a situation like this, what you have is a people flow -- not a work flow. That's great if you want to focus purely on managing resource efficiency (keeping people busy) but poor if you want to optimize flow and control your liabilities. The signal you can only get from a true workflow Once you can truly say that you have modeled the life cycle of turning options into assets, you can say that a card's position in the workflow reflects how close or far away it is from realizing its potential value. What this means is that when you move a card to the right in your workflow, you're signaling that you're closer to turning the liability into an asset, and when you move it to the left (backward) in your workflow, you're moving farther away from that outcome. (Does it make more sense now why we handle backward movement the way we do in ActionableAgile?) Model your workflow so that how you move a work item is a signal of movement toward or away from realizing its potential value. When you can say this, then you can start signaling how long an item is likely to take to become an asset. With tools like ActionableAgile's Cycle Time Scatterplot, you can see how long it's likely to take for an item to be completed from any workflow stage.
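The "how much longer from here?" signal can be sketched from historical stage-entry dates: for each workflow column, look at how long past items took to finish from the moment they entered that column, then read off a percentile. The stage names, dates, and percentile helper below are all invented for illustration; this is the general idea, not ActionableAgile's implementation.

```python
from datetime import date

# Each record: stage name -> date the item entered that stage ("Done" = finished).
history = [
    {"To Do": date(2023, 1, 2), "Doing": date(2023, 1, 5),
     "Review": date(2023, 1, 12), "Done": date(2023, 1, 14)},
    {"To Do": date(2023, 1, 3), "Doing": date(2023, 1, 4),
     "Review": date(2023, 1, 6), "Done": date(2023, 1, 13)},
    {"To Do": date(2023, 1, 9), "Doing": date(2023, 1, 16),
     "Review": date(2023, 1, 20), "Done": date(2023, 1, 21)},
]

def pct85(values):
    """A simple 85th-percentile approximation for a small sample."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(0.85 * len(ordered)))]

for stage in ["To Do", "Doing", "Review"]:
    remaining = [(rec["Done"] - rec[stage]).days for rec in history]
    print(f"From '{stage}': 85% of past items finished within {pct85(remaining)} days")
```

Each column gets its own "wait from this point" number, which only makes sense if rightward movement really does mean "closer to done" -- the point of modeling a true work flow rather than a people flow.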
It's like when you go to Disney World or someplace like it, and you're in line for a ride, and you see a sign that says your wait is 1 hour from this point. Each column of your workflow can have that metaphorical sign. Except you can also know the likelihood associated with that information. Want to make a change? Don't stress if you just learned that your workflow isn't all it's cracked up to be. You can make a change! It's all about board design and policies. If you want tips on how to change your board or process, check out my blog post on how to design your board to focus on flow, or watch my talk below on this topic from Lean Agile London 2022!
- The Deviance of Standard Deviation
Before getting too far into this post, there are two references that do a far better job than I ever will at explaining the deficiency of the standard deviation statistic: "The Flaw of Averages" by Dr. Sam Savage (https://www.flawofaverages.com/) Pretty much anything written by Dr. Donald Wheeler (spcpress.com) Why is the standard deviation so popular? Because that's what students are taught. It's that simple. Not because it is correct. Not because it is applicable in all circumstances. It is just what everyone learns. Even if you haven't taken a formal statistics class, somewhere along the line, you were taught that when presented with a set of data, the first thing you do is calculate an average (arithmetic mean) and a standard deviation. Why were we taught that? It turns out there's not a really good answer to that. An unsatisfactory answer, however, would involve the history of the normal distribution (Gaussian) and how over the past century or so, the Gaussian distribution has come to dominate statistical analysis (its applicability--or, rather, inapplicability--for this purpose would be a good topic for another blog, so please leave a comment letting us know your interest). To whet your appetite on that topic, please see Bernoulli's Fallacy by Aubrey Clayton. Arithmetic means and standard deviations are what is known as descriptive statistics. An arithmetic mean describes the location of the center of a given dataset, while the standard deviation describes the data's dispersion. For example, say we are looking at Cycle Time data and we find that it has a mean of 12 and a standard deviation of 4.7. What does that really tell you? Well, actually, it tells you almost nothing--at least almost nothing that you really care about. The problem is that in our world, we are not concerned so much with describing our data as we are with doing proper analysis on it.
Specifically, what we really care about is being able to identify possible process changes (signal) that may require action on our part. The standard deviation statistic is wholly unsuited to this pursuit. Why? First and foremost, the nature of how the standard deviation statistic is calculated makes it very susceptible to extreme outliers. A classic joke I use all the time is: imagine that the world's richest person walks into a pub. The average wealth of everyone in the pub is somewhere in the billions, and the standard deviation of wealth in the pub is somewhere in the billions. However, you know that if you were to walk up to any other person in the pub, that person would not be a billionaire. So what have you really learned from those descriptive statistics? This leads us to the second deficiency of the standard deviation statistic. Whenever you calculate a standard deviation, you are making a big assumption about your data (recall my earlier post about assumptions when applying theory?). Namely, you are making an assumption that all of your data has come from a single population. This assumption is not talked about much in statistical circles. According to Dr. Wheeler, "The descriptive statistics taught in introductory classes are appropriate summaries for homogeneous collections of data. But the real world has many ways of creating non-homogeneous data sets." (https://spcpress.com/pdf/DJW377.pdf). In our pub example above, is it reasonable to assume that we are talking about a single population of peoples' wealth that shares the same characteristics? Or is it reasonable that some signal exists as evidence that one certain data point isn't routine? Take the clichéd probability example of selecting marbles from an urn. The setup usually concerns a single urn that contains two different coloured marbles--say red and white--in a given ratio. Then some question is asked, like if you select a single marble, what is the probability it will be red?
The problem is that in the "real world," your data is not generated by choosing different coloured marbles from an urn. Most likely, you don't know if you are selecting from one urn or several urns. You don't know if your urns contain red marbles, white marbles, blue marbles, bicycles, or tennis racquets. Your data is generated by a process where things can--and do--change, go wrong, encompass multiple systems, etc. It is generated by potentially different influences under different circumstances with different impacts. In those situations, you don't need a set of descriptive statistics that assume a single population. What you need to do is perform analysis on your data to find evidence (signal) of multiple or changing populations. In Wheeler's nomenclature, what you need to do is first determine if your data is homogeneous or not. Now, here's where proponents of the standard deviation statistic will say that to find signal, all you do is take your arithmetic mean and start adding or subtracting standard deviations to it. For example, they will say that roughly 99.7% of all data should fall within your mean plus or minus 3 standard deviations. Thus, if you get a point outside of that, you have found signal. Putting aside for a minute the fact that this type of analysis ignores the assumptions I just outlined, this example brings into play yet another dangerous assumption of the standard deviation. When starting to couple percentages with a standard deviation (like 68.2%, 95.5%, 99.7%, etc.), you are making another big assumption that your data is normally distributed. I'm here to tell you that most real-world process data is NOT normally distributed. So what's the alternative? As a good first approximation, a great place to start is with the percentile approach that we utilize with ActionableAgile Analytics (see, for example, this blog post). This approach makes no assumptions about single populations, underlying distributions, etc.
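You can see both failure modes in a few lines of Python. The cycle-time numbers below are invented, but right-skewed in the way most real delivery data is: the mean-plus-3-sigma threshold gets inflated by the skewed tail, while a plain percentile makes no distributional assumption at all.

```python
from statistics import mean, stdev, median

# Made-up, right-skewed Cycle Time data (days) -- note the outlier at 34.
cycle_times = [2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 10, 12, 15, 21, 34]

# The "mean +/- 3 standard deviations" rule: the outlier inflates both the
# mean and the standard deviation, so the threshold flags almost nothing.
upper_3_sigma = mean(cycle_times) + 3 * stdev(cycle_times)

# The percentile alternative: an order statistic, barely moved by outliers.
ordered = sorted(cycle_times)
p85 = ordered[min(len(ordered) - 1, int(0.85 * len(ordered)))]

print(f"mean + 3*stdev threshold: {upper_3_sigma:.1f} days")
print(f"85th percentile: {p85} days")
print(f"median: {median(cycle_times)} days")  # also robust to the outlier
```

Here the 3-sigma threshold lands above 30 days and would call even the 34-day item "routine," while the 85th percentile gives a usable 15-day line to manage against -- the same kind of percentile line used throughout ActionableAgile.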
If you want to be a little more statistically rigorous (which at some point you will want to be), then you will need the Process Behaviour Chart advocated by Dr. Donald Wheeler, continuing Dr. Walter Shewhart's work. A deeper discussion of the Shewhart/Wheeler approach is a whole blog series on its own that, if you are lucky, may be coming to a blog site near you soon. So, to sum up, the standard deviation statistic is an inadequate tool for data analysis because it: is easily influenced by outliers (which your data probably has); often assumes a normal distribution (which your data doesn't follow); and assumes a single population (which your data likely doesn't possess). Any analysis performed on top of these flaws is almost guaranteed to be invalid. One last thing. Here's a quote from Atlassian's own website: "The standard deviation gives you an indication of the level of confidence that you can have in the data. For example, if there is a narrow blue band (low standard deviation), you can be confident that the cycle time of future issues will be close to the rolling average." There are so many things wrong with this statement that I don't even know where to begin. So please help me out by leaving some of your own comments about this on the 55 Degrees community site. Happy Analysis!
- How do you use pace percentiles on ActionableAgile's aging chart?
It is inevitable that there are ways that the software creator intends a feature to be used, and there are ways that it ends up being used. 🤓 Sometimes, these unintended uses can be even better than the initial idea, but other times, they can end up causing harm. In a recent chat with Daniel Vacanti, we discussed this very thing about ActionableAgile™️ Analytics. I can say I was more than mildly surprised when one of my favorite features came up: the pace percentile feature on ActionableAgile's Work Item Aging chart. I love this feature because it helps you get early signals of slow work. However, after talking to and training many people, Dan saw that people very often misinterpret what this particular signal really tells us. How did he come to that conclusion? He talked to them about the decisions they would make because of the signals and saw that they weren't necessarily picking up what was intended. Instead, the decisions people were likely to make could lead to worse outcomes than currently presented on the chart. What do you think? Are you interpreting the signals correctly? Head over to our user community to discuss!