What Is Software Delivery Friction and How Do You Fix It?

What Is Software Delivery Friction?

Software delivery friction is anything in the development process that creates resistance and slows the flow of work from idea to production. In practical terms, friction is the accumulation of small delays, inefficiencies, and bottlenecks that make it harder for teams to deliver value: anything that increases the time between an idea and working software.

Why does reducing friction matter? Friction directly drags down team productivity and morale. A recent empirical study found that friction significantly hinders productivity, increases frustration, and contributes to low morale among developers. When developers spend more time waiting on processes or fighting bottlenecks than solving problems, they have more “bad days” and feel less effective. Removing friction helps engineers stay focused on delivering features and fixes.

Not all friction is inherently bad. Some safeguards (like quality checks) intentionally add a bit of pause for good reason, but unnecessary friction is pure waste. The goal is a smooth, efficient development process that maintains quality without needless slowdowns. In the sections below, we’ll cover the common causes of software delivery friction, how to measure them, and ways to fix these issues to speed up delivery.

Common Causes of Delivery Friction

What typically creates friction in a team’s delivery workflow? Here are some of the most common culprits:

Long Pull Request Lead Times

Pull request (PR) lead time measures how long a code change takes to go from a developer opening a pull request to that change being merged (and, in stricter definitions, deployed). When PR Lead Time for Changes is consistently long, it’s a red flag. Long lead times mean slow delivery of code, often because work sits idle waiting for reviews, approvals, or passing tests.

Industry research uses lead time as a key proxy for delivery performance. Google’s DevOps Research and Assessment (DORA) group identifies Lead Time for Changes as one of the “four key” metrics of software delivery efficiency, noting it “reflects the efficiency of your software delivery process”. When it takes weeks for a code commit to reach production, that suggests a high-friction process with many bottlenecks. Long lead times delay value to customers and increase context-switching for developers (as they move on to other tasks and then come back to old code later).

Common reasons for long PR lead times include slow code reviews, overly large batches of changes, and manual release steps. The longer a PR stays open, the more likely it will have merge conflicts or require rework, further compounding the friction.
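To make this metric concrete, here is a minimal sketch of computing PR lead time from pull request timestamps. The records and field names (opened_at, merged_at) are illustrative stand-ins, not any particular tool’s schema:

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records exported from source control; the field names
# (opened_at, merged_at) are illustrative, not a specific tool's schema.
prs = [
    {"opened_at": "2024-05-01T09:00:00", "merged_at": "2024-05-03T15:30:00"},
    {"opened_at": "2024-05-02T10:00:00", "merged_at": "2024-05-02T16:00:00"},
    {"opened_at": "2024-05-04T11:00:00", "merged_at": "2024-05-09T12:00:00"},
]

def lead_time_days(pr):
    """Days from the PR being opened to it being merged."""
    opened = datetime.fromisoformat(pr["opened_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - opened).total_seconds() / 86400

lead_times = [lead_time_days(pr) for pr in prs]
# Median is more robust than the mean against a few long-lived outlier PRs.
print(f"Median PR lead time: {median(lead_times):.1f} days")
```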

Too Much Work in Progress (WIP)

Another classic cause of friction is having too many concurrent tasks. When a team or developer has a high Work in Progress (WIP), it means they are juggling many pieces of work at the same time. High WIP violates a core principle of lean development: excess multitasking creates waste. Developers get interrupted and pulled between tasks, which slows everything down.

Having lots of work in progress leads to context switching and longer cycle times for each item. It’s like having five projects all at 80% done instead of delivering them one by one. As more items pile up partially done, the overall flow stalls. In lean terms, flow efficiency (the share of total elapsed time in which work is actively progressing) drops, because time goes to waiting and context-switching rather than completing work.

Research and industry experience strongly suggest limiting WIP to reduce friction. For instance, an initiative at Siemens Health Systems introduced strict WIP limits and saw delivery cycle time drop from 71 days to 43 days, a roughly 40% reduction, without adding any extra people or hours. The first-pass test success rate also jumped from 75% to 95% as teams focused on fewer items at once. This real-world example shows that when teams stop starting and start finishing, they deliver faster and with higher quality.
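As a rough illustration of measuring WIP, the sketch below counts started-but-unfinished items per developer from a board snapshot. The statuses, names, and limit are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical snapshot of the team's ticket board; "assignee" and "status"
# are illustrative field names, not a specific tracker's schema.
tickets = [
    {"assignee": "ana", "status": "in_progress"},
    {"assignee": "ana", "status": "in_progress"},
    {"assignee": "ana", "status": "in_review"},
    {"assignee": "ben", "status": "in_progress"},
    {"assignee": "ben", "status": "done"},
]

# Anything started but not finished counts as work in progress.
WIP_STATUSES = {"in_progress", "in_review"}
WIP_LIMIT_PER_DEV = 2  # example threshold; tune to your team

wip = Counter(t["assignee"] for t in tickets if t["status"] in WIP_STATUSES)
for dev, count in wip.items():
    flag = "  <-- over WIP limit" if count > WIP_LIMIT_PER_DEV else ""
    print(f"{dev}: {count} items in progress{flag}")
```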

Slow or Clogged Code Reviews

Delays in code review are one of the most visible friction points in modern software teams. If pull requests wait days for a reviewer to even look at them, that’s a major bottleneck. Review Latency is the term for how long a PR waits before the first meaningful review begins. High review latency means engineers spend a lot of time in limbo, with work stuck in the queue awaiting feedback. This stretches out lead times and frustrates developers who are eager to merge their changes.

Code review is critical for quality, but it must be efficient to avoid becoming a drag on throughput. Google’s own engineering practices advise that one business day is the maximum time it should take to respond to a code review request. When reviews are consistently slow, team velocity decreases and developers can become frustrated or try to bypass the code review process.

Several factors can cause review latency to spike, such as too few reviewers for the volume of PRs or very large PRs that are daunting to review. The result is delivery friction: new features and fixes are delayed while waiting for someone to click “Approve” or leave comments. To gauge this, teams track metrics like time-to-first-review or average review turnaround. Any bottleneck in the PR feedback cycle shows up directly as slower delivery.
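Here is a minimal sketch of measuring review latency from PR timestamps. The field names and the 24-hour alert threshold are illustrative assumptions, not a standard:

```python
from datetime import datetime

# Hypothetical PR events; "opened_at" and "first_review_at" are illustrative
# field names. A PR with no review yet has first_review_at = None.
prs = [
    {"id": 101, "opened_at": "2024-05-01T09:00:00", "first_review_at": "2024-05-01T14:00:00"},
    {"id": 102, "opened_at": "2024-05-02T10:00:00", "first_review_at": "2024-05-06T11:00:00"},
    {"id": 103, "opened_at": "2024-05-07T08:00:00", "first_review_at": None},
]

def review_latency_hours(pr):
    """Hours from the PR being opened to its first meaningful review."""
    if pr["first_review_at"] is None:
        return None  # still waiting for a first review
    opened = datetime.fromisoformat(pr["opened_at"])
    reviewed = datetime.fromisoformat(pr["first_review_at"])
    return (reviewed - opened).total_seconds() / 3600

latencies = [review_latency_hours(pr) for pr in prs]
reviewed = [lat for lat in latencies if lat is not None]
print(f"Average review latency: {sum(reviewed) / len(reviewed):.1f} hours")

# Flag PRs breaching an example 24-hour (roughly one business day) threshold.
slow = [pr["id"] for pr, lat in zip(prs, latencies) if lat is not None and lat > 24]
print(f"PRs over the 24-hour threshold: {slow}")
```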

Unreviewed (Skipped) Pull Requests

On the flip side of slow reviews is another problematic pattern: merging code changes without any review at all. If a team has many No-Review PRs, it means changes are going into the codebase without the benefit of a second pair of eyes. This usually indicates a breakdown in process. Perhaps the team is under intense time pressure or lacks a proper code ownership model, leading developers to self-approve or bypass reviews to get things done.

While skipping code review might speed up individual merges in the short term, it increases friction downstream by injecting more bugs and quality problems. Peer review exists to catch issues early. In fact, classic studies have shown that formal code inspections (a rigorous form of review) can catch 65%–85% of defects in software. Forgoing reviews means many of those defects will slip through to later stages or production, where they are much costlier to fix. It’s a case of “go slow to go fast”. Not reviewing code may feel faster today, but it creates a drag on the team’s overall velocity as bugs and rework accumulate.

A high rate of unreviewed PRs is a strong signal of delivery friction. It often suggests team dysfunction or bottlenecks: perhaps reviewers are so backlogged that authors give up on waiting, or there is a lack of accountability for code quality. In any case, unreviewed changes can lead to more outages, regressions, and technical debt, which ultimately make future deliveries even slower. Healthy teams review nearly all code changes; for example, minware suggests keeping the percentage of PRs merged with no review below 5%.
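A hedged sketch of computing the no-review rate over merged PRs, using the 5% guideline mentioned above; the review_count field is an illustrative stand-in for whatever your tooling exposes:

```python
# Hypothetical merged-PR records; "review_count" is an illustrative field
# holding the number of reviews a PR received before merging.
merged_prs = [
    {"id": 201, "review_count": 2},
    {"id": 202, "review_count": 0},
    {"id": 203, "review_count": 1},
    {"id": 204, "review_count": 0},
]

no_review = [pr for pr in merged_prs if pr["review_count"] == 0]
rate = 100 * len(no_review) / len(merged_prs)
print(f"No-review PR rate: {rate:.0f}% (PRs: {[pr['id'] for pr in no_review]})")

# Example threshold from the guideline above: keep this below 5%.
if rate > 5:
    print("Warning: unreviewed merges exceed the 5% guideline")
```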

Key Friction Indicators and Metrics

How can you tell if your team is experiencing software delivery friction? The good news is that many friction points can be made visible through data and metrics. Below is a summary of key indicators of delivery friction, the metrics used to track them, and what each situation might suggest:

| Friction Indicator | Associated Metric | What It Suggests |
| --- | --- | --- |
| Slow code integration (long wait from code commit to merge) | PR Lead Time for Changes | High lead time (e.g. PRs taking many days to merge) indicates a sluggish delivery pipeline. Likely causes include blockers in development or testing, lengthy reviews, or release delays keeping code from being merged quickly. |
| Excess multitasking or context switching | Work in Progress (WIP) | High WIP (many concurrent items per developer) suggests people are spread thin. This often means inefficiency, tasks waiting on dependencies, or over-commitment. |
| Review bottlenecks (slow feedback on PRs) | Review Latency | High review latency (a long time to first review or comment) points to a code review backlog. The team may not be keeping up with incoming PRs, causing work to idle in review queues and delaying delivery. |
| Skipped peer review | No-Review PRs | A high share of PRs merged with zero reviews signals a process breakdown (time pressure, reviewer backlog) and predicts more bugs, rework, and technical debt downstream. |

These metrics act as diagnostic indicators. By tracking them over time, engineering managers can spot where friction is increasing. For example, if you see review latency creeping up month over month, it’s a sign that your code review process is becoming a bottleneck (perhaps the team needs to allocate more time for reviews or simplify the submission flow). It’s important to look at these indicators in combination, not isolation. Often they are interrelated. For instance, a spike in WIP will likely also cause longer PR lead times and possibly more no-review merges (as everyone is too busy to review each other’s code). Using a holistic, data-driven approach helps identify the specific friction points affecting your team.

Using Data to Expose Friction (How minware Can Help)

Modern engineering analytics tools like minware make it much easier to uncover delivery friction by automatically mining development data. minware surfaces insights about bottlenecks and inefficiencies by aggregating metadata from your source control, issue trackers, CI/CD pipelines, and code reviews.

Because it pulls directly from development metadata (timestamps on commits, PR events, ticket status changes, etc.), minware objectively measures the friction indicators discussed above. You don’t have to rely on gut feel or wait for complaints in retrospectives; the data will highlight trouble spots in real time. For example, minware can show you the average Review Latency for your team and even break it down by repository or reviewer. If certain projects consistently have slower review times, that pinpoints a friction area to address.

minware similarly tracks PR Lead Time for Changes across your organization, so you can identify if certain repos or features are suffering unusual delays. It can flag if developers have too many concurrent tasks by analyzing commit and ticket data for high Work in Progress. And it will call attention to trends like an uptick in No-Review PRs, signaling that the normal code review process is being circumvented.

Crucially, tools like minware don’t just present raw numbers; they help interpret them. minware’s dashboards often include benchmarks or thresholds (drawn from industry research and best practices) to contextualize your team’s metrics. For instance, if your average PR lead time is 10 days, minware will highlight that as a concern, because high-performing teams typically merge code in under a day. By visualizing such gaps, minware helps engineering leaders prioritize where to intervene. Instead of guessing why “things feel slow lately,” you can pinpoint that code reviews are the culprit, or that a particular team has too much WIP causing thrash.

How to Fix Software Delivery Friction

Identifying friction points is only half the battle. The real goal is to reduce or eliminate that friction and get your development process flowing smoothly. Here are some concrete strategies engineering leaders can use to fix delivery friction:

Limit Work in Progress and Batch Sizes

One of the fastest ways to improve flow is implementing WIP limits and encouraging smaller batch sizes for work. By capping how many tasks or PRs can be in progress at once, you force the team to finish work before starting new things. This improves focus and drastically cuts down wait times. Techniques like Kanban boards with explicit WIP limits, or simply a team policy of “no more than N open PRs per developer,” prevent overload. Smaller batch sizes (breaking features into smaller tickets and breaking tickets into smaller PRs) also mean reviews go faster and code integrates more continuously, keeping the pipeline moving with less friction.
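One way to make such a policy mechanical is a small scheduled check. This sketch assumes you can list open PRs with their authors (hard-coded here as sample data; a real job would fetch them from your source control API) and simply fails when anyone exceeds the limit:

```python
import sys
from collections import Counter

# Example "no more than N open PRs per developer" policy; adjust to taste.
MAX_OPEN_PRS_PER_DEV = 3

# Illustrative data; in practice, fetch open PRs from your source control API.
open_prs = [
    {"author": "ana"}, {"author": "ana"}, {"author": "ana"}, {"author": "ana"},
    {"author": "ben"},
]

counts = Counter(pr["author"] for pr in open_prs)
offenders = {dev: n for dev, n in counts.items() if n > MAX_OPEN_PRS_PER_DEV}

if offenders:
    for dev, n in offenders.items():
        print(f"{dev} has {n} open PRs (limit {MAX_OPEN_PRS_PER_DEV}) - finish before starting new work")
    sys.exit(1)  # nonzero exit lets a scheduled job surface the violation
print("All developers within the WIP limit")
```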

Streamline the Code Review Process

To address review-related friction, consider overhauling how code reviews are done. Set clear expectations that reviews should be timely (recall Google’s one-day guideline) and make turnaround part of your team’s norms. Use tactics like reviewer rotations or dedicating certain hours of the day to review duty so that PRs get prompt attention. Automating parts of the workflow also helps. For example, use bots or rules to auto-assign reviewers the moment a PR is opened, so no time is lost finding the right reviewer. Encourage developers to send smaller, focused PRs (instead of giant code dumps) to make reviews quicker and less burdensome. The goal is a healthy review cadence where feedback flows quickly and doesn’t become a multi-day gate.
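As an illustration of a reviewer rotation, here is a minimal round-robin assigner that skips the PR’s author. The roster is hypothetical, and a real implementation would persist the rotation position between runs:

```python
from itertools import cycle

# Hypothetical team roster for the rotation.
REVIEWERS = ["ana", "ben", "carla", "dev"]

def make_assigner(roster):
    """Return a function that picks the next reviewer in round-robin order.

    Assumes the roster contains at least one person besides the PR author.
    """
    rotation = cycle(roster)
    def assign(author):
        for candidate in rotation:
            if candidate != author:
                return candidate
    return assign

assign = make_assigner(REVIEWERS)
print(assign("ana"))    # ben (ana's own turn is skipped)
print(assign("carla"))  # dev (carla's turn comes up next and is skipped)
```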

Ensure Code Doesn’t Skip Review

If you discover many PRs being merged without review, reinforce a strong review culture. This might involve updating branch protection settings so that at least one approval is required to merge to main, or setting up CI checks that flag any PR merged with zero reviewers. Emphasize to the team that peer review is essential for knowledge sharing and quality. Reviews shouldn’t be seen as optional even under deadlines. If high no-review PR counts stem from extreme time pressure, address the planning issue (e.g. adjust timelines or scope) rather than sacrificing code review. By making sure every change (even small fixes) gets at least a quick review, you prevent downstream friction from bugs and maintainability issues.
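For teams on GitHub, required approvals can be set through the branch protection REST API; a hedged sketch follows. The owner, repo, and token are placeholders, and other hosts (GitLab, Bitbucket) expose equivalent settings:

```python
import requests

# Placeholders; substitute your own organization, repository, and token.
OWNER, REPO, TOKEN = "your-org", "your-repo", "ghp_..."

url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/main/protection"
payload = {
    # Require at least one approving review before merging to main.
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    # The API requires these fields to be present; null leaves them unset.
    "required_status_checks": None,
    "enforce_admins": True,
    "restrictions": None,
}
resp = requests.put(
    url,
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
print("Branch protection updated: 1 approval now required to merge to main")
```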

Invest in Automation and Tooling

Automated tooling can eliminate much of the manual friction in delivery. For example, a robust continuous integration/continuous delivery (CI/CD) pipeline runs tests and deployments automatically, reducing the need for developers to wait on slow manual steps. Automated test suites and static analysis catch problems early, so reviewers spend less time on nitpicks. If certain stages are slowing you down (for instance, a manual QA sign-off or deployment script), consider solutions like feature flagging or trunk-based development to integrate code continuously and decouple release from merge. Also leverage analytics and alerts by using a tool like minware to notify the team when a PR has been idle too long or when WIP exceeds a threshold. The more you can automate away repetitive friction points, the faster and smoother your delivery pipeline will be.
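As a sketch of the alerting idea, this snippet flags PRs with no activity beyond a threshold. The data and field names are illustrative; a real job would pull them from your source control API, or a tool like minware can raise the alert for you:

```python
from datetime import datetime, timedelta

# Example threshold: alert when a PR sees no activity for two days.
IDLE_THRESHOLD = timedelta(days=2)
now = datetime.fromisoformat("2024-05-10T09:00:00")  # fixed for reproducibility

# Illustrative open-PR records; "last_activity_at" is a hypothetical field.
open_prs = [
    {"id": 301, "last_activity_at": "2024-05-09T16:00:00"},
    {"id": 302, "last_activity_at": "2024-05-05T10:00:00"},
]

for pr in open_prs:
    idle = now - datetime.fromisoformat(pr["last_activity_at"])
    if idle > IDLE_THRESHOLD:
        print(f"PR #{pr['id']} idle for {idle.days} days - ping a reviewer")
```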

Continuously Improve with Data and Feedback

Finally, treat friction reduction as an ongoing, iterative effort. Use regular retrospectives to discuss where the team feels the process is getting bogged down, and bring data to back it up. Track your friction metrics over time so you can see whether changes actually work: if you implement a new review policy, does Review Latency drop over the next few weeks?
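For example, a quick before/after comparison of weekly medians (the numbers here are made up for illustration) shows one way to quantify the effect of a policy change:

```python
from statistics import median

# Illustrative weekly median review-latency samples (hours) before and after
# a hypothetical policy change; real numbers would come from your metrics tool.
before = [30, 26, 34, 29]
after = [14, 11, 9, 12]

change = 100 * (median(after) - median(before)) / median(before)
print(f"Median review latency: {median(before)}h -> {median(after)}h ({change:+.0f}%)")
```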

Celebrate improvements (e.g. “our average PR lead time fell from 8 days to 4 days after we started pairing on reviews”) to reinforce the value of changes. Cultivate a mindset of continuous improvement so that friction doesn’t gradually creep back. Encourage team members to speak up when something is causing delays or frustration. Often, the people on the ground notice friction first. By continually monitoring metrics and listening to feedback, you can adapt and fine-tune your processes before small issues become major bottlenecks.

Conclusion

Software delivery friction is the silent killer of engineering productivity. The good news is that with the right metrics and mindset, you can make the invisible visible and systematically root out these inefficiencies. We’ve seen how long PR lead times, high WIP, slow reviews, and unreviewed code are telltale signs of friction dragging down your team. By measuring these indicators (with help from tools like minware) and taking targeted action, you can significantly speed up delivery while still maintaining quality.