Thorough Review Rate (TRR)
Thorough Review Rate (TRR) measures the portion of development time spent on pull requests that receive substantive reviewer feedback. It reflects the quality of code reviews and helps identify whether teams are engaging in meaningful collaboration.
Calculation
minware defines substantive reviews as those containing comments or questions rather than just approvals. The rate is calculated by dividing the dev days spent on PRs with substantive reviews by total dev days across all PRs.
The metric is calculated as:
TRR = dev days on substantively reviewed PRs ÷ total dev days on PRs × 100
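As a sketch, the calculation above can be expressed in a few lines of Python. The `PullRequest` record and its field names are hypothetical, standing in for whatever data your review tooling exports; the key assumption is that a PR counts as substantively reviewed when it has at least one review comment or question, not just an approval.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical PR record; field names are illustrative, not a real API.
@dataclass
class PullRequest:
    dev_days: float            # development time attributed to this PR
    review_comments: int = 0   # substantive comments/questions (approvals excluded)

def thorough_review_rate(prs: List[PullRequest]) -> float:
    """TRR = dev days on substantively reviewed PRs / total dev days * 100."""
    total = sum(pr.dev_days for pr in prs)
    if total == 0:
        return 0.0
    reviewed = sum(pr.dev_days for pr in prs if pr.review_comments > 0)
    return reviewed / total * 100

prs = [
    PullRequest(dev_days=3.0, review_comments=4),  # substantively reviewed
    PullRequest(dev_days=2.0, review_comments=0),  # approval only
]
print(thorough_review_rate(prs))  # 3 / 5 * 100 = 60.0
```

Note that the metric weights by dev days rather than PR count, so a large unreviewed PR drags TRR down more than a small one.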
Goals
TRR helps teams assess whether code reviews are contributing real value. It answers questions like:
- Are pull requests receiving thoughtful, actionable feedback?
- Is the review process uncovering bugs, risks, or knowledge gaps?
- Is the review process serving as a genuine safety net, or just a rubber stamp?
A high TRR reflects an engaged review culture. A low TRR may point to [Rubber-Stamp Reviews] or unchecked quality risks.
Variations
TRR may also be referred to as Substantive Review Rate or Review Depth. Common ways to break down this metric include:
- By team or service, to compare feedback norms
- By PR size, to see if complexity correlates with review engagement
- By reviewer role, such as peer vs. tech lead feedback
- By frequency, e.g. per sprint, week, or quarter
Some teams also track Average Comments per PR as a supplement to TRR.
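The breakdowns above amount to grouping the same calculation by a dimension such as PR size. A minimal sketch, assuming each PR is tagged with a size bucket and a flag for whether it received a substantive review:

```python
from collections import defaultdict

# Illustrative records: (size bucket, dev_days, has_substantive_review).
prs = [
    ("small", 1.0, True),
    ("small", 1.0, True),
    ("large", 4.0, False),
    ("large", 2.0, True),
]

def trr_by_bucket(prs):
    """TRR per bucket: reviewed dev days / total dev days * 100."""
    totals = defaultdict(float)
    reviewed = defaultdict(float)
    for bucket, dev_days, substantive in prs:
        totals[bucket] += dev_days
        if substantive:
            reviewed[bucket] += dev_days
    return {b: reviewed[b] / totals[b] * 100 for b in totals}

print(trr_by_bucket(prs))  # e.g. small at 100%, large at about 33%
```

The same grouping key could instead be team, reviewer role, or calendar week, depending on which variation you are tracking.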
Limitations
TRR captures feedback presence, not feedback impact. A review with minor comments may meet the criteria but still miss major issues. The metric also assumes review systems log comments reliably and are linked to PRs with accurate timing.
For better insight, pair with:
| Complementary Metric | Why It’s Relevant |
|---|---|
| PR Open to First Review Time (PRRT) | Shows whether reviews are arriving promptly |
| Pull Request Size | Reveals whether larger PRs deter deep reviews |
| Post PR Review Dev Day Ratio (PRR) | Measures how quickly developers resume work after receiving feedback |
Optimization
Improving TRR supports quality outcomes and cross-team learning:
- Adopt Code Review Standards that encourage reviewers to leave specific, constructive comments.
- Model quality reviews by sharing examples of helpful feedback across the team.
- Keep PRs small, reducing review fatigue and increasing engagement.
- Coach on question-based review techniques, guiding rather than dictating.
- Track review trends, using metrics like TRR to reinforce expectations and highlight review discipline gaps.
Thorough Review Rate is a safeguard against unchecked changes and a signal of engineering maturity. When teams take review seriously, they build better software, and better habits, together.