Getting Started: Sprints & Predictability
Predictability is one of the four main areas of minware reporting. The goal is to help teams consistently deliver planned new work.
The predictability reports covered in this guide are based on sprints.
The sprint reports provide visibility into common agile process goals, including sprint completion vs. commitment, minimizing scope creep, minimizing ticket rollover, and avoiding under-the-radar work that occurs outside of sprints.
These reports provide much deeper visibility into sprint performance than you can find in reports from ticketing tools like Jira.
Sprint Trends
The first sprint report is Sprint Trends, which you can find in the main “Dashboards” menu for your organization. The Sprint Trends report gives you visibility into sprint effectiveness over a longer time period so you can see how you are improving from sprint to sprint.
Here are some notes about this report:
- Tickets that were already done when the sprint started are excluded from all metrics in this report, but you can find a list of those tickets at the bottom.
- For sprints that are in progress, the metrics will look at the current story point estimate for tickets and whether they are currently in the sprint rather than computing those things as of the sprint end date.
- If you don’t use story points, you can derive estimate values from another field by changing the “Ticket Estimate Units” parameter. The charts will still say “points” but those values will be whatever you specify in the parameter.
- If you want to change the definition of done to something other than a “Done” status category from your ticketing system (e.g., if you want to consider tickets with an in-progress status of “Waiting Release” to be done), you can click on “Parameters” and edit the “Ticket Status - Done” parameter.
This section provides an overview of how each metric is calculated and how to use it to improve predictability.
Ticket Hygiene and Sprint Traceability
The first section of the Sprint Trends report includes a scorecard of best practices that are important for ensuring the accuracy of later sprint metrics in this report. If they are low, we recommend focusing on improving these scores as a first step toward predictability.
You can click on any of the metric names to drill down into the metrics by team, sprint, individual, and ticket/branch to see specific items that are affecting the score.
The three metrics in this section are:
- 2.4 Linking Branches to Tickets - This metric looks at how many dev days (measured by minware’s time model) are on branches that have links to tickets in the branch name or pull request title. This metric is important because it is difficult to tell whether untraceable dev work is associated with a ticket in a sprint, or whether it is under-the-radar work that detracts from sprint metric accuracy.
- 2.5 Ticket Estimates - This check measures how much work time (both coding and non-coding) is on tickets that don’t have an estimate set. Estimates are required for later sprint metrics based on story points, so significant unestimated work will diminish the effectiveness of later metrics in this report.
- 4.2 On-Sprint Work - This metric looks at how much of the dev time that passes the 2.4 check is on a ticket in an active sprint. Under-the-radar work won’t show up in any sprint metrics and may interfere with completing sprint goals.
- Note: If 2.4 is low, this metric may not reflect most work, so it is important to get 2.4 passing before focusing on this metric.
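As a rough illustration of how a traceability score like 2.4 works, the sketch below computes the share of dev days on branches that link back to a ticket. This is not minware’s actual implementation, and the branch fields (`dev_days`, `has_ticket_link`) are hypothetical:

```python
# Illustrative sketch of a branch-to-ticket linking score: the fraction of
# dev days spent on branches that can be traced to a ticket.
# Branch fields are hypothetical, not minware's API.

def branch_link_score(branches):
    total = sum(b["dev_days"] for b in branches)
    linked = sum(b["dev_days"] for b in branches if b["has_ticket_link"])
    return linked / total if total else 0.0
```

A low score means most dev time cannot be attributed to sprint tickets, which is why the later sprint metrics become unreliable.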
Points Completed/Rolled Over vs. Commitment
This section showcases the main sprint metric that most people use in an agile process: amount of completed story points (or other estimate units) vs. the commitment at the start of the sprint.
The chart shows tickets that were in the sprint when it ended (not including removed tickets) measured by story points and broken down by whether or not those tickets were in a done status as of the end of the sprint.
The ending points are compared to a 100% line based on the story point estimate as of the start of the sprint for all tickets that were in the sprint at that time.
You can click on an individual bar to show a list of all the tickets that were in that status for the given sprint.
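The completion-vs-commitment calculation described above can be sketched roughly as follows. This is an illustrative approximation, not minware’s implementation; the ticket fields (`points_at_start`, `points_at_end`, `in_sprint_at_start`, `in_sprint_at_end`, `done_at_end`) are hypothetical:

```python
# Illustrative sketch of the completed/rolled-over vs. commitment metric.
# Ticket fields are hypothetical, not minware's API.

def completion_vs_commitment(tickets):
    """Compare points at sprint end against the starting commitment."""
    # Commitment: points as of the sprint start for tickets in the sprint then.
    committed = sum(t["points_at_start"] for t in tickets
                    if t["in_sprint_at_start"])
    # Completed: tickets still in the sprint at the end and in a done status.
    completed = sum(t["points_at_end"] for t in tickets
                    if t["in_sprint_at_end"] and t["done_at_end"])
    # Rolled over: in the sprint at the end but not done.
    rolled_over = sum(t["points_at_end"] for t in tickets
                      if t["in_sprint_at_end"] and not t["done_at_end"])
    return committed, completed, rolled_over
```

Note that removed tickets contribute to the commitment (the 100% line) but to neither the completed nor rolled-over totals.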
Sprint List and Detail Links
The next section provides links to Sprint Detail reports for each of the individual sprints that are shown in the Sprint Trends report based on your time range and team selection.
We recommend using the Sprint Detail report when zooming in on a particular sprint. It can be particularly useful for sprint retrospectives.
The Sprint Detail report is covered in a later section.
Average Ticket Cycle Time Days (by Point Estimate)
The next chart shows the most important efficiency metric for tickets in a sprint: cycle time.
Cycle time measures the time from when a ticket was first moved to an in-progress status until it was first moved to a done status. (Tickets moved directly from to-do to done are not counted here and not included in the averages).
Cycle time is important because it represents the total amount of calendar time a ticket was in progress. Ideally, tickets with smaller estimates that have less work would also have lower cycle times. However, waiting time in the workflow from things like review or QA hand-offs can increase cycle time even for small tickets.
While long cycle times do not directly affect velocity because velocity only counts what was completed at the end of the sprint, long cycle times cause context switching, which can erode overall velocity in an indirect way.
The chart provides a breakdown of cycle times by story point estimate so that you can see whether smaller tickets do actually take less calendar time as expected.
The cycle time averages will also show you when tickets are taking multiple sprints to complete if the averages are longer than the length of a sprint.
You can click on the individual bars to see a list of tickets with individual cycle times that comprise the average.
Points from Added/Removed Tickets vs. Commitment
One of the primary goals of a sprint process is to provide transparency and predictability for stakeholders who are depending on tickets in the original sprint commitment to be done at the end of the sprint.
Significant amounts of scope creep from tickets added to the sprint after it starts can interfere with this predictability goal.
This chart shows you two quantities: the total number of story points, measured as of the start of the sprint, for tickets in the original commitment that were removed before the end (not the estimate at the time of removal, which may be different); and the total number of story points, measured as of the end of the sprint (not when the tickets were added, which may be different), for tickets that were in the sprint when it ended but not at the start. Tickets that were both added and removed are not counted.
If you see a significant amount of work that was removed, that indicates a low level of predictability. We recommend clicking on the bar to drill down into the specific tickets that were removed to see why they were dropped, which may be due to insufficient planning or interruptions from quality issues.
If there is a lot of work added but not removed, then that may be okay depending on the nature of the team’s responsibilities. This indicates that a significant portion of work completed by the team was not planned at the start. In this case, we recommend looking at the tickets added to see whether they could have been added to the sprint at the start with better planning (like getting further advanced notice from outside stakeholders), or whether they were inherently unpredictable (like responding to an outage or customer request).
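The added and removed totals described above can be sketched as follows. This is an illustrative approximation, not minware’s implementation; the ticket fields are the same hypothetical ones used earlier:

```python
# Illustrative sketch of the added/removed points chart described above.
# Removed tickets are valued at their sprint-start estimate; added tickets
# at their sprint-end estimate. Ticket fields are hypothetical.

def added_and_removed_points(tickets):
    removed = sum(t["points_at_start"] for t in tickets
                  if t["in_sprint_at_start"] and not t["in_sprint_at_end"])
    added = sum(t["points_at_end"] for t in tickets
                if t["in_sprint_at_end"] and not t["in_sprint_at_start"])
    return added, removed
```

A ticket that was both added and removed during the sprint is in the sprint at neither the start nor the end, so it naturally falls out of both totals.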
Points from Estimate Increases/Decreases vs. Commitment
The next chart shows the total amount of estimate increases and decreases for tickets that were in the sprint both at the start and end. The increase or decrease amount is measured by comparing the point estimate as of the start of the sprint to the point estimate as of the end.
Large swings in ticket estimates may indicate a number of underlying problems. They may be caused by lack of up-front planning to capture functional and technical requirements, or they may be caused by inherent unpredictability driven by issues with code quality or technical debt.
You can click on the individual bars to see a list of all tickets in the sprint, and you can click on the “Point Estimate Increases” and “Point Estimate Decreases” columns to sort tickets with increases or decreases to the top.
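The increase and decrease totals described above can be sketched as follows. This is an illustrative approximation, not minware’s implementation, using the same hypothetical ticket fields as the earlier sketches:

```python
# Illustrative sketch of the estimate increase/decrease totals described
# above. Only tickets in the sprint at both the start and the end are
# considered. Ticket fields are hypothetical.

def estimate_changes(tickets):
    increases = 0
    decreases = 0
    for t in tickets:
        if not (t["in_sprint_at_start"] and t["in_sprint_at_end"]):
            continue
        delta = t["points_at_end"] - t["points_at_start"]
        if delta > 0:
            increases += delta
        elif delta < 0:
            decreases += -delta
    return increases, decreases
```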
Points Completed by Assignee at End
The final chart in this report shows the number of completed story points over time by assignee as of the end of the sprint.
It is important to note that the ending assignee may not reflect the person who did most of the work on a ticket if there are multiple collaborators or the ticket goes through multiple steps like QA and review. Senior team members who spend a lot of time helping other people or planning tickets may also have lower amounts of completed points.
So, these numbers should only be treated as a rough guide, and managers should consider the team’s workflow and individual responsibilities when interpreting this data.
Sprint Detail
You can find the Sprint Detail report by clicking on the link for an individual sprint under the Sprint List and Detail Links section of the Sprint Trends report.
The goal of the Sprint Detail report is to provide full visibility into everything that happened during a sprint, in support of the kind of detailed sprint analysis that typically occurs during a sprint retrospective.
In this report, you can hover over the “?” icon for any chart to get detailed information about how the data was calculated.
Here are some important notes about this report (these are the same as for the Sprint Trends report):
- Tickets that were already done when the sprint started are excluded from all metrics in this report, but you can find a list of those tickets at the bottom.
- For sprints that are in progress, the metrics will look at the current story point estimate for tickets and whether they are currently in the sprint rather than computing those things as of the sprint end date.
- If you don’t use story points, you can derive estimate values from another field by changing the “Ticket Estimate Units” parameter. The charts will still say “points” but those values will be whatever you specify in the parameter.
- If you want to change the definition of done to something other than a “Done” status category from your ticketing system (e.g., if you want to consider tickets with an in-progress status of “Waiting Release” to be done), you can click on “Parameters” and edit the “Ticket Status - Done” parameter.
This section summarizes the metrics in the Sprint Detail report. Please see the Sprint Trends section above for details about why these metrics are important and how to use them.
Sprint Summary
This table shows basic information about the sprint, including its start date, end date, completion vs. commitment, and average cycle time of all tickets.
Point Summary
This table summarizes how points were added or removed between the start and end of the sprint, as well as how many tickets were completed or rolled over at the end.
Contributor Summary
This table lists how many points were completed by assignee as of the end of the sprint. As noted above in the Points Completed by Assignee at End section, this may not reflect the person who did most of the work if there are multiple collaborators, so it is important to consider the team’s workflow and individual roles when looking at these numbers.
Points Completed by Epic
This pie chart helps visualize progress on epics during the sprint by breaking down completed points according to ticket epic (including the epic of the parent ticket for subtasks).
Points Completed by Issue Type
This pie chart breaks down completed points by their ticket’s respective issue type field. It can be useful for seeing how much work went into bug fixes or tech debt vs. new feature work.
Ending Sprint Ticket Detail
This table lists all tickets that were in the sprint, only excluding tickets that were done at the start or added after the start and removed before the end. It provides a detailed view of estimate changes, additions, removals, completed vs. rollover, and cycle time for the individual ticket.
Note: cycle times shown here are not limited to the sprint time frame, so they may start before the sprint starts or end after it ends. Only the first in-progress status change and first done status change are counted, so they also do not capture cycle time extensions from tickets that were reopened. Completed tickets also may not have a cycle time if they were moved directly from a to-do status to a done status.
Tickets Added to Sprint
This table shows tickets that were in the sprint when it ended, but not when it started, along with their story point estimate as of the end of the sprint.
Tickets Removed from Sprint
This table lists only tickets that were in the sprint when it started but not when it ended, with their story point estimate as of the start of the sprint. Status at End shows the ticket status on the sprint end date, not the status at the time of removal.
Tickets Rolled Over at End of Sprint
This section lists tickets that were in the sprint when it ended (not removed) but not complete. It shows the status as of the end of the sprint and the total number of sprints the ticket was in, including the current sprint and those before or after.
Ticket Points Changed During Sprint
This chart shows the starting and ending points for tickets that were in the sprint at the start and end but had their point estimate change during the sprint.
Tickets Already Done at Start (Excluded from Other Metrics)
The last section of this report lists tickets that were in the sprint when it started, but already in a done status. These tickets are excluded from all the other metrics and sections in the report.