Forecasting is inherently probabilistic – meaning that we cannot say anything definitive about the future; we can only give the probability that something may happen. This is intrinsic to software development, but it is a hard concept to convey to those who just want to know “when will the work be done?”. In this conversation we speak with Kanban experts and the co-hosts of the Drunk Agile Podcast: Dan Vacanti and Prateek Singh. We talk about flow, forecasting, service level expectations, work item aging, and more!
Daniel Vacanti and Prateek Singh are both expert Kanban thought leaders, trainers and consultants, and they are co-hosts of the “Drunk Agile” podcast which you can find on YouTube. They are also both on the advisory board of ProKanban.org. Prateek is co-author of “The Kanban Pocket Guide”, and Dan is also author of “Actionable Agile Metrics for Predictability”, and co-author of “The Kanban Guide.”
Probabilistic Forecasting helps us worry less about estimates, and focus more on making good business decisions. Be sure to subscribe to the Scaling Tech Podcast today on YouTube, Spotify and Apple Podcasts so you never miss out on an episode!
Listen on Spotify
Listen on Apple Podcasts
Watch the video:
Show notes with links to jump ahead are below
Show Notes from Episode 29 – Dan Vacanti and Prateek Singh on Kanban and Probabilistic Forecasting
Timestamp links will open that part of the show in YouTube in a new window
- Introduction
- 00:00 The opening quote from Prateek Singh is about how developers often see agile as something that gets in the way of getting work done, and about what leaders actually care about: “The thing that engineering leaders are really concerned about, is are we predictable enough, are we efficient enough, are we delivering the right things?”
- Dan Vacanti’s opening quote is about the importance of understanding probabilistic approaches: “The questions that teams get asked, things like ‘when will it be done?’ … they are asked when the team has the least amount of information … and the way the world works, whenever you have uncertainty involved, that demands a probabilistic approach.”
- After the opening quote, Arin talks about how we have interviewed Dan Vacanti previously on the Scaling Tech Podcast, and he’s the first return guest that we’ve had. This episode also marks the first time we’ve had two guests in a single episode. David notes that the deep expertise Dan and Prateek bring on Kanban makes this a great episode to learn more about Kanban and Probabilistic Forecasting. Arin and David agree that David’s conversation with Dan around rightsizing and service level expectations is very insightful.
- Arin also notes that we had some recording issues with this episode due to bad connections. We tried to edit out the affected sections, so hopefully they do not show up too much in the final product.
- Introductions and Informalities
- 01:25 Arin introduces our guests: “Daniel Vacanti and Prateek Singh are both expert Kanban thought leaders, trainers and consultants, and they are co-hosts of the “Drunk Agile” podcast which you can find on YouTube. They are also both on the advisory board of ProKanban.org. Prateek is co-author of “The Kanban Pocket Guide”, and Dan is also author of “Actionable Agile Metrics for Predictability”, and co-author of “The Kanban Guide.””
- After the introductions, we joke about sharing a bourbon with Dan during the recording of our previous episode, but since we are recording this episode earlier in the day, only Arin was prepared to drink a straight bourbon in the morning.
- As we move to more serious topics, Prateek explains how he and Dan met in 2013 and then eventually started the Drunk Agile podcast during the pandemic as a fun way to have the same conversations about problem solving and lean methods.
- Probabilistic Forecasting
- 08:24 Arin asks the guests to start with a discussion about forecasting in Kanban and then talk in more detail about Probabilistic Forecasting. Prateek kicks off the conversation by talking about how you can gather metrics on a Kanban team to measure work flowing through the system, and then use those real-world metrics to forecast when work will be done (a throughput-based forecasting sketch appears after these show notes).
- Dan talks about how probability is intrinsic to the way we work, and Kanban reflects that in its probabilistic forecasting techniques. Dan states that “from a probabilistic side of things, the questions that teams are asked, things like when will it be done? How much is this going to cost? Et cetera, et cetera, et cetera. If you think about when those questions are asked, they’re asked when the team has the least amount of information about the problem that they’re trying to solve. They’re asked when we are confronted with a problem that is dominated by uncertainty. And the way the world works isn’t just something that Prateek and I made up. It’s just the way the world works. Whenever you have uncertainty involved, that demands a probabilistic approach.”
- “So if somebody says, when will it be done? You can’t give them an exact deterministic answer. But what we can probably do using the metrics that Prateek just mentioned is quantify some level of risk and say, hey, based on some outcome that you’re looking for, here is your percentage chances of that happening.”
- Arin brings up the classic “Cone of Uncertainty” that was popularized by Steve McConnell in the 90’s, which says that we can be more accurate in our estimates over time, but we really don’t know anything at the beginning of a project. Dan talks about how he feels the “Cone of Uncertainty” did a disservice to our industry, because the reality is that uncertainty never really goes away, even as a project progresses. He gives the analogy of hurricane forecasts, which use a cone to show the likelihood of where a hurricane will make landfall. With each updated forecast, the cone stays essentially the same size; its position or direction just shifts. So the uncertainty about the future never really goes away; we are just more confident about the immediate next positions of the hurricane. Dan says software is the same: we never actually drive out uncertainty to converge on some precise numerical estimate of a distant point in the future.
- Making Business Decisions Based on Forecasts
- 13:45 Arin talks about one of the most common biases in estimation, sometimes referred to as expert bias. The idea is that an expert says “yes, I was wrong before, but now that I know what went wrong before, I can now provide a very accurate estimate.” But in reality, they are still likely to be wrong about other things that they have not anticipated, and they should not assume their estimates will be any more accurate on future rounds of similar work.
- Prateek points out that this gets to a problem senior leaders have: they are trying to base economic decisions on estimates. They need to decide what to work on and what not to work on, and if they cannot get accurate estimates from their team then it’s hard to make business decisions based on those estimates. The more that we can communicate the level of risk with an estimate, the more that helps senior leaders to make better business decisions.
- Prateek explains the better conversations this leads to: “If you want to get stuff done by August 31st, sure, we can get stuff done by August 31st, but there’s about a 40% chance of that happening. Are you okay taking that 60% risk? Now we can have a smarter, more adult economic conversation of what is the risk we’re willing to take.”
- The “Flaw” of Averages and Little’s Law
- 16:38 David asks about good examples of probabilistic forecasting in Kanban. He and Dan discuss how scatter plots help you to see cycle times for individual work items, and to estimate how long it takes for 85% of items to be completed. That means you can take the cycle time data for a given time period, across all completed items, and say that “with 85% certainty, work items will be completed within a cycle time of X” (a percentile-calculation sketch appears after these show notes).
- Dan talks about how using that sort of data, we can have more informed conversations about what is an “economically justifiable bet.” How much risk are we willing to take on? This is better than talking about simple averages.
- Dan also attempts to give us a short description of Little’s Law, which is about looking backward, not forward, and so shouldn’t really be used for forecasting. Dan explains that “people forget that Little’s law is a relationship of averages, essentially arithmetic means. It’s a relationship of averages. Even if you could use that formula to do a forecast, you’d be making a forecast that’s based on an average and this is where, as anybody who’s listened to the Drunk Agile podcast knows, anytime you hear the word average, the very next thing that you need to think about is the flaw of averages. The book called The Flaw of Averages by Dr. Sam Savage, which the basic premise is, plans based on average fail on average. Absolutely, make a forecast based on an average, but you are going to be wrong an average amount of time. Again, is that the risk you’re willing to live with?”
- Service Level Expectations and Work Item Aging
- 22:31 David brings up a part of the Kanban Guide, which discusses Service Level Expectations (SLE) and how they change the way we forecast. Dan explains that essentially the SLE becomes the forecast you provide to others. To do this, the SLE is calculated from your historical cycle time, which is based on real-world data. It depends on your risk threshold: how often are you willing to be wrong with a forecast? 15% of the time, 30% of the time? Whatever your risk threshold is, look at historical data for a cycle time that ensures you won’t be wrong more than that percentage of the time.
- The SLE ends up being a statement like “70% of our items take eight days or less to complete.” It’s still an expectation, not a commitment, but it’s a probabilistic statement based on real data.
- Dan and David talk about how the SLE relates to identifying aging work items. Dan notes that “work item aging” is a metric not used in other frameworks, and something he finds very powerful. As items flow through the system, they age, and we can compare that age to our service level expectation. If a work item’s age is approaching the SLE, that means it is not moving through the system well, which should prompt a conversation to assess why and to remedy it (an aging-check sketch appears after these show notes).
- Work Item Sizing and Right Sizing
- 26:05 David brings up the topic of work item size, and Dan uses this point to dispel a big myth about Kanban. Kanban does not require that all work items be of a similar size. Many of the academic works that Kanban is based on, such as Deming’s work, say that variation in your item sizing is going to exist and there’s nothing you can do to eliminate that variation.
- Dan says that based on that, we should “actually embrace the fact that there is going to be this variation in size. So the question now becomes, because we know that there’s going to be variation in size, the question becomes, well, how much variation is too much variation? At what point does something maybe become too big or even potentially too small? You know, how do we get that signal that something is too big versus too small and only take action when we get that signal? So don’t worry about up front trying to get everything to be the same size.”
- This leads Dan to discuss “Right Sizing”, which is also discussed in the Kanban Pocket Guide. “Right sizing is just this idea [that] before we pull an item into our process, having a quick conversation and say, hey, based on what we know about this thing right now, which we recognize is not very much, we don’t know very much about this thing. But based on what we know about it, do we think we can fit it within the SLE? The way that it’s defined right now, do we think it fits within the SLE? If the answer to that is yes, conversation’s over, you start working. If the answer to that is no, you start talking about, okay, how can we break things up … such that it will fit within the SLE?”
- David and Dan close out this segment by talking about comparisons between Kanban, Scrum and other frameworks like SAFe. Dan says that the flow metrics he and Prateek are discussing really don’t require that you use any particular framework, including Kanban. You can measure flow in any system or framework, it’s just that Kanban provides a framework that makes it easier to measure flow.
- A message from our sponsor: WebRTC.ventures
- 34:58 Building custom WebRTC video applications is hard, but your go live doesn’t have to be stressful. WebRTC.ventures is here to help you build, integrate, assess, test, and manage your live video application for web or mobile! Show notes continue below the ad
- Agile Engineering
- 35:48 Arin asks Prateek to reflect on what aspects of agile people typically miss when they think about software engineering in particular. Prateek notes that in his experience, “it’s interesting when you talk to developers, they see Agile as either just something on the side that’s happening or really something that’s getting in their way of getting work done.” Prateek says it’s important to roll back this perspective for engineering leaders.
- Prateek notes that what engineering leaders are really concerned about is predictability, efficiency, and whether we are delivering the right things. Flow metrics can help leaders see those things in the team. The worst thing for a developer is when they are heads down and focused on their work, and someone comes in to ask them about estimates or story points. Prateek notes that developers don’t want to be taken away from their work for hours of meetings, so if some meetings are required, how do we make them as efficient as possible? Prateek and Dan are focused on bringing flow metrics to other methods besides just Kanban, to help all teams be more efficient in their work.
- ProKanban.org
- 40:20 Arin asks Prateek to say more about the work that he and Dan do with ProKanban.org, and Prateek explains that it’s a community of people interested in and helping to promote the simple strategies discussed in our podcast today. Dan is one of the founders, Colleen Johnson is the CEO. Prateek is the head of Learning and Development, so he helps Kanban trainers to be more effective and efficient in their work.
- Dan adds that ProKanban also serves as a safe, diverse and inclusive community where people can go to learn more about Kanban. Making it a safe space was a key goal for Dan and Colleen when they started it.
- Conclusion
- 41:50 To learn more about Dan and Prateek’s work, check out the links below, and be sure to follow the Drunk Agile podcast on YouTube! They read all the comments and respond to them. You can also see a lot about their work on their LinkedIn profiles and contact them there, as well as be sure to visit ProKanban.org!
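Illustrative Code Sketches (referenced in the show notes above)

Prateek’s point about using real-world flow metrics to forecast “when will it be done?” is often put into practice with a Monte Carlo simulation over historical throughput. The episode does not prescribe a specific technique, so treat this as a minimal Python sketch of one common approach, with made-up throughput and backlog numbers:

```python
import random

# Hypothetical history: items completed per day over recent weeks.
daily_throughput = [0, 2, 1, 3, 0, 1, 2, 4, 1, 0, 2, 3, 1, 2]

def simulate_days_to_finish(backlog_size, history, trials=10_000):
    """Resample historical daily throughput to simulate many possible futures."""
    results = []
    for _ in range(trials):
        remaining, days = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # one plausible day of progress
            days += 1
        results.append(days)
    return sorted(results)

runs = simulate_days_to_finish(backlog_size=30, history=daily_throughput)
# A probabilistic answer to "when will it be done?": not a single date,
# but "85% of simulated futures finished within N days."
p85 = runs[int(0.85 * len(runs)) - 1]
print(f"85% of trials finished 30 items within {p85} days")
```

The output is a risk statement rather than a commitment, which is exactly the kind of “smarter, more adult economic conversation” Prateek describes.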
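The scatter plot discussion with David boils down to reading a percentile off your historical cycle times: sort the cycle times of completed items and pick the value that your chosen confidence level of items finished within. A minimal sketch, using invented cycle-time data:

```python
def sle(cycle_times, confidence=0.85):
    """Return a cycle time that roughly `confidence` of past items finished within."""
    ordered = sorted(cycle_times)
    index = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[index]

# Hypothetical cycle times (in days) for recently completed work items.
history = [2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 11, 14, 21]

print(f"85% of items finished in {sle(history)} days or less")
print(f"70% of items finished in {sle(history, confidence=0.70)} days or less")
```

The second line has the shape of an SLE like the one quoted above: “70% of our items take eight days or less to complete.”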
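Work item aging and right sizing both reduce to comparing something against that SLE: the age of work already in progress, or a quick judgment about whether a new item can fit within the SLE before pulling it. A small sketch with a hypothetical SLE value and made-up in-progress items:

```python
from datetime import date

SLE_DAYS = 8  # illustrative: "85% of our items take 8 days or less"

in_progress = {  # hypothetical items and the dates work started on them
    "ITEM-101": date(2024, 5, 1),
    "ITEM-107": date(2024, 5, 9),
    "ITEM-110": date(2024, 5, 13),
}

today = date(2024, 5, 14)

# Work item aging: flag anything whose age is approaching or past the SLE,
# which should prompt a conversation about why it is not flowing.
for item, started in in_progress.items():
    age = (today - started).days
    flag = "INVESTIGATE" if age >= 0.8 * SLE_DAYS else "ok"
    print(f"{item}: {age} days old (SLE {SLE_DAYS} days) -> {flag}")

# Right sizing: before pulling a new item, ask whether we think it fits
# within the SLE; if not, break it up rather than estimating it precisely.
def looks_right_sized(expected_days, sle_days=SLE_DAYS):
    return expected_days <= sle_days
```

The 0.8 threshold for flagging aging items is an arbitrary illustration; the point is simply to compare age against the SLE early enough to act.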
Links from Episode 29 – Dan Vacanti and Prateek Singh on Kanban and Probabilistic Forecasting
- Dan Vacanti on LinkedIn Connect with Dan!
- Prateek Singh on LinkedIn Connect with Prateek!
- ProKanban.org Kanban community co-founded by Daniel
- Kanban Learning Resources From ProKanban.org
- Actionable Agile Metrics for Predictability (Amazon) – Dan’s book on Kanban metrics
- The Kanban Guide – Dan is co-author of the Kanban Guide
- The Kanban Pocket Guide – Prateek is co-author of the Kanban Pocket Guide
- Drunk Agile Podcast – Dan and Prateek co-host the Drunk Agile Podcast on YouTube