minware reports on all of the activity classification results in its scorecard report. These best practices look at branches, pull requests, tickets, sprints, reviews, projects and more to give you a comprehensive view of performance across the entire software development lifecycle. You can find a list of all the scorecard metrics here.
By looking at data for thousands of teams, minware has seen the full range of maturity in every area and how that affects team performance. We have identified criteria for classifying different types of activity based on how things work in the real world.
Unlike many tools that generate noise for every possible warning, we recognize that nothing is ever 100% perfect and that trying to reach 100% is usually not worth the effort.
Instead, minware's activity classification engine uses thresholds that tolerate the occasional mistakes that still occur even on high-performing teams, and highlights the areas where improvements will have the greatest impact.
minware's activity classification engine is designed to be unopinionated about processes and only flags things that violate commonly accepted best practices for version control and ticketing. Organization-specific choices like whether to use time estimates vs. story points, different Git flows, and various ticket status schemes are all acceptable to the activity classification engine.
There are limits, of course. If you don't believe in version control, or you commit everything to master without using branches, then the activity classification engine is going to complain. It is configurable, though, so you can pick and choose which checks you want to see if you're still a strict believer in story points despite not caring about version control.