Getting Started: Scorecard

The minware scorecard sets the foundation for your team’s improvement with minware. The scorecard captures hygiene metrics that may impact the accuracy of other minware reports, and covers common best practices that drive improvement across the four pillars of software engineering: quality, efficiency, predictability, and value.

minware’s scorecard metrics reflect team processes and software engineering best practices, not individual performance. Even entry-level engineers and inexperienced teams can achieve 100% scores because the metrics check basic things like whether pull requests have a review before they are merged. The scorecard is a starting point that will set you up for further improving quality, efficiency, predictability, and value with other minware reports.

Most scorecard metrics are calculated from time spent (either active coding time or all work time, per minware’s time model) rather than from the number of tickets, lines of code, or PRs. Weighting by time spent lets you focus on the items that will have the biggest impact. You can read more about minware’s unique time model in the time model documentation.
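
To make the weighting concrete, here is a simplified, hypothetical sketch (not minware’s actual calculation) showing how a time-weighted pass rate can tell a very different story than a simple item count:

```python
# Hypothetical sketch: a time-weighted pass rate vs. a simple item count.
# The numbers are illustrative, not minware's actual model.

items = [
    # (hours of work time, passes the check?)
    (40.0, False),  # one large item that fails the check
    (1.0, True),
    (1.5, True),
    (2.0, True),
]

count_rate = sum(1 for _, ok in items if ok) / len(items)
time_rate = sum(h for h, ok in items if ok) / sum(h for h, _ in items)

print(f"By item count: {count_rate:.0%} passing")  # 75% - looks fine
print(f"By time spent: {time_rate:.0%} passing")   # 10% - the large item dominates
```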

Prerequisites

Some scorecard metrics work with Git data only, but most require both Git and ticket data (and a few work with ticket data only). If any metrics show a “?” in the “Passing?” column instead of a check mark or “X”, it means insufficient data is available for that metric and the currently selected teams/contributors.

Most scorecard metrics are measured in Dev Days, which require regular commit activity to attribute work time to commits. If people are not committing daily, not using version control, or you haven’t connected all their Git data, then dev-day scorecard metrics will not reflect all work. (Note: squashes, rebases, and committing daily but waiting multiple days to push commits are all handled correctly.)
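
A minimal sketch of the idea, assuming commit author dates (the data and names here are illustrative, and minware’s actual attribution is more sophisticated):

```python
# Hypothetical sketch: counting distinct dev days per contributor from commit
# author dates. Using author dates (not push dates) is why committing daily
# but pushing late still works; real squash/rebase handling is more involved.
from collections import defaultdict
from datetime import date

commits = [
    ("alice", date(2024, 6, 3)),
    ("alice", date(2024, 6, 3)),  # multiple commits on one day still = 1 dev day
    ("alice", date(2024, 6, 4)),
    ("bob",   date(2024, 6, 7)),  # bob commits roughly weekly
]

dev_days = defaultdict(set)
for author, day in commits:
    dev_days[author].add(day)

for author, days in sorted(dev_days.items()):
    print(author, "->", len(days), "dev day(s)")

# Days worked without any commits can't be attributed this way, which is why
# infrequent committers (or missing Git connections) under-report work time.
```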

Only the following scorecard metrics will calculate work time using ticket activity if no commit activity is available for a contributor:

  • 2.1. Use of Bug Ticket Type
  • 2.2. Use of Epics
  • 2.3. Non-Epic Ticket Labels
  • 2.5. Ticket Estimates
  • 2.6. No Omnibus Tickets
  • 4.3. No Multi-Sprint Rollover
  • 4.6. Sprint Completion
  • 5.1. Project Estimation
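
In case it helps to picture that behavior, here is a hedged sketch of the fallback (the function and variable names are hypothetical, not minware’s API):

```python
# Hypothetical sketch of the fallback used by the metrics listed above:
# prefer commit-derived work time, and fall back to ticket activity when a
# contributor has no commit activity at all.

def work_time(contributor, commit_hours, ticket_hours):
    """Return (hours, source) for one contributor."""
    hours = commit_hours.get(contributor, 0.0)
    if hours > 0:
        return hours, "commits"
    # No commit activity connected: fall back to ticket activity.
    return ticket_hours.get(contributor, 0.0), "tickets"

commit_hours = {"alice": 32.0}                # alice's Git data is connected
ticket_hours = {"alice": 30.0, "bob": 25.0}   # bob has ticket activity only

for person in ("alice", "bob"):
    hours, source = work_time(person, commit_hours, ticket_hours)
    print(f"{person}: {hours}h from {source}")
```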

Focus on a few metrics at a time

You will notice that there are a lot of metrics in the scorecard report. You don’t need to tackle all of them at once!

The scorecard includes metrics covering all stages and aspects of software development, but it is intended for teams to focus on improving only a few metrics at a time.

After you have reviewed the scorecard, we recommend filtering it down to a small number of metrics, and then adding the selected metrics to a dashboard for ongoing monitoring.

Once you have met the targets for your selected metrics, we recommend re-evaluating and adjusting your focus metrics, while periodically checking the whole scorecard to make sure earlier metrics do not regress.

In this way, you can improve manageably and sustainably across all four pillars.

Improve hygiene metrics first

The scorecard is roughly sorted by process maturity, with the top metrics focusing on code/ticket hygiene. Hygiene is important because it will impact the coverage and accuracy of later scorecard metrics and other minware reports.

We recommend first focusing on these basic hygiene metrics. You should work on improving any metrics that are below target. If any are very low (< 50%), then later scorecard metrics and other minware reports may be highly inaccurate and hygiene should be your primary focus.

  • 1.2. Use of Branches - Commits to main/master are not traceable to tickets or code reviews, so making sure commits are in branches is the first step. (Committing to main/master can also make collaboration more difficult and cause other problems.) You should also verify that the contributors and dev days appearing in this metric match what you expect, to confirm there are no gaps in your commit data.
  • 1.3. Use of Pull Requests - Pull requests are not strictly necessary for tracing code to tickets because ticket keys can be read from branch names in addition to PR titles. However, pull requests are required for code reviews and are generally a good practice.
  • 2.4. Linking Branches to Tickets - This metric looks at how much coding time is on branches with ticket keys in the branch name or pull request title (see the sketch after this list). If this metric is low, then dev time will not be directly traceable to tickets and many later metrics will lack visibility.
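
To illustrate how branch-to-ticket linking can work in principle, here is a hypothetical sketch that scans branch names and PR titles for Jira-style ticket keys (the regex and data are illustrative, not minware’s actual parser):

```python
# Hypothetical sketch: link coding time to tickets by finding keys like
# "PAY-142" in branch names or PR titles, then compute the linked share.
import re

TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def ticket_key(branch_name, pr_title):
    """Return the first ticket key found in the branch name or PR title."""
    for text in (branch_name, pr_title or ""):
        match = TICKET_KEY.search(text.upper())
        if match:
            return match.group(0)
    return None

branches = [
    # (branch name, PR title, hours of coding time)
    ("feature/PAY-142-retry-webhooks", "PAY-142: retry failed webhooks", 6.0),
    ("quick-fix", "fix typo in README", 1.5),  # no key: untraceable time
]

linked = sum(h for b, t, h in branches if ticket_key(b, t))
total = sum(h for _, _, h in branches)
print(f"{linked / total:.0%} of coding time linked to tickets")  # 80%
```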

Next, if you plan to use certain metrics or reports, we recommend focusing on these area-specific hygiene metrics that may impact their accuracy:

  • 2.1. Use of Bug Ticket Type - This metric looks at how much work time is spent on tickets with a bug issue type. The intent is to capture whether quality issues are being reported as such, which is required for later quality metrics like bug fix vs. find rate to be accurate.
  • 2.2. Use of Epics - This metric looks at how much work time is on tickets that have an epic parent ticket. Without parent tickets, value metrics will not be able to show how much effort went into different projects (a rollup sketch follows this list).
  • 2.5. Ticket Estimates - This metric looks at how much work time (all time, not just coding) is on tickets that have a story point or time estimate set. Predictability metrics require time estimates to work properly, and estimating every ticket before work starts is generally a good practice.
  • 4.2. On-Sprint Work - This metric measures the amount of dev time on tickets that are in sprints. If people do significant work outside of sprints, that can detract from the accuracy of sprint metrics.
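
As a picture of why epic parents matter for value reporting, here is a hypothetical rollup sketch (the epics, tickets, and hours are made up):

```python
# Hypothetical sketch: roll ticket work time up to epic parents. Tickets
# without an epic land in a "(no epic)" bucket, which is why low epic
# coverage hides where project effort actually went.
from collections import defaultdict

tickets = [
    # (ticket key, epic parent or None, hours of work time)
    ("PAY-142", "Checkout Revamp", 12.0),
    ("PAY-150", "Checkout Revamp", 8.0),
    ("OPS-77",  "Billing Migration", 15.0),
    ("OPS-91",  None, 20.0),  # unparented: effort can't be attributed
]

effort = defaultdict(float)
for _, epic, hours in tickets:
    effort[epic or "(no epic)"] += hours

for epic, hours in sorted(effort.items(), key=lambda kv: -kv[1]):
    print(f"{epic}: {hours}h")
```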

Scorecard columns

The scorecard has five columns:

  • Item - This is the name of the scorecard metric, and includes a help bubble explaining more about what it measures and why it’s important. When drilling down, this column shows the team, person, sprint, ticket, or branch instead.
  • Total Size - This tells you how many metrics are within each category, or, at the metric level, how many activity units are included in the measurement. For example, if the total size for an item is 400 days, that means minware has identified 400 days of work across all contributors for the specified time frame. If very little time is included, the metric won’t be very accurate.
  • Target/Info - At the category level, the target is how many of the metrics within the category we recommend that you pass (which is 100% for all categories). For the metrics themselves, the target is the passing score we recommend. Individual items may have info icons that provide detailed information about why that item was passing or failing.
  • Passing? - At the category level, this tells you how many of the metrics within the category achieve a passing score. At the metric level, the column tells you how your team scores on that metric, with a check if it’s passing, and an X if it needs improvement. For individual items, this column shows whether they are passing or failing.
  • Line Graph - The line graph illustrates how your team has scored on the item over time.

Drilling down

Each item of the scorecard supports drilling down through several layers to get more detail. You can go from teams to team members, all the way down to the specific tickets or PRs that make up the data. This lets you inspect the specific pieces of work that feed into the metric calculation so you can troubleshoot and pinpoint particular areas of opportunity.