All Work Time by Ticket Status (ATTS)
All Work Time by Ticket Status (ATTS) measures the total time engineers spend working on or around a task across all workflow stages, not just active development. It helps surface how much effort is spent waiting, coordinating, reviewing, or unblocking tasks throughout the lifecycle of a ticket.
ATTS is primarily a workflow efficiency metric, but it can also inform predictability and quality. When work consistently piles up in “Blocked,” “In Review,” or “Testing,” delivery timelines become harder to forecast and teams are more likely to take risky shortcuts to “catch up.”
In AI-assisted teams, ATTS helps validate that faster implementation (via copilots, codegen, or agentic AI) isn’t just shifting the bottleneck downstream into review, testing, compliance, or release readiness.
How Do You Calculate All Work Time by Ticket Status (ATTS)?
Time is attributed based on the ticket’s status history in the issue tracker and the corresponding timestamps. Each time window between status changes is mapped to a category (e.g., “In Progress,” “In Review,” “Blocked,” etc.).
The metric can be viewed per ticket or in aggregate across a team to reveal patterns.
The metric is calculated as:
ATTS = sum of time spent in each ticket status per work item or team
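As a concrete sketch, here is one way to compute per-status totals from a ticket's status history. The `(timestamp, status)` transition-list shape is an assumption about how your tracker exports changelog data, and `atts` is a hypothetical helper name, not part of any specific tool's API:

```python
from datetime import datetime

def atts(status_history, now):
    """Sum hours spent in each status.
    status_history: list of (timestamp, new_status) transitions, oldest first."""
    totals = {}
    # Pair each transition with the next one (the last status runs until `now`)
    for (ts, status), (next_ts, _) in zip(status_history,
                                          status_history[1:] + [(now, None)]):
        totals[status] = totals.get(status, 0.0) + (next_ts - ts).total_seconds() / 3600
    return totals  # hours per status

history = [
    (datetime(2024, 5, 1, 9), "In Progress"),
    (datetime(2024, 5, 2, 9), "In Review"),
    (datetime(2024, 5, 3, 9), "Done"),
]
print(atts(history, datetime(2024, 5, 3, 9)))
# → {'In Progress': 24.0, 'In Review': 24.0, 'Done': 0.0}
```

Summing these per-ticket dictionaries across a team gives the aggregate view.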
What Counts as “Work Time” in ATTS?
ATTS uses ticket status durations as a proxy for where time is being consumed across the workflow. In practice, this typically represents a blend of:
- Active effort
- Waiting/queueing time
- Coordination overhead
- Dependency and approval delays
If your organization uses a separate time allocation model (for example, inferring active work vs. overhead), you can also compute ATTS by attributing “work time” to a ticket and then segmenting that time by whatever status the ticket was in during those intervals.
What Time Unit Should You Use?
ATTS is often reported in hours or days. The most important thing is consistency across teams and time ranges.
In AI-enabled delivery, it can be useful to track both:
- Calendar time in status (what the business experiences end-to-end)
- Work-time-weighted views (what humans are actually spending effort on)
This avoids over-crediting “speed” improvements that come from automation running after hours while human review capacity remains unchanged.
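One way to produce both views is to compare raw calendar hours in a status against the hours that fall inside a nominal human workday. The 09:00-17:00 Monday-Friday window below is an assumption you would replace with your team's actual schedule:

```python
from datetime import datetime, timedelta

def calendar_vs_work_hours(start, end, workday=(9, 17)):
    """Return (calendar hours, workday-weighted hours) for a status window.
    Assumes a 09:00-17:00 Mon-Fri workday; steps in 1-hour chunks."""
    calendar = (end - start).total_seconds() / 3600
    work = 0.0
    t = start
    while t < end:
        chunk_end = min(end, t + timedelta(hours=1))
        # Count only chunks starting on a weekday within working hours
        if t.weekday() < 5 and workday[0] <= t.hour < workday[1]:
            work += (chunk_end - t).total_seconds() / 3600
        t = chunk_end
    return calendar, work

# A review that sits from Friday 16:00 to Monday 10:00:
print(calendar_vs_work_hours(datetime(2024, 5, 3, 16), datetime(2024, 5, 6, 10)))
# → (66.0, 2.0) — 66 calendar hours, but only 2 human working hours
```

The gap between the two numbers is exactly the "automation ran over the weekend" effect described above.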
What Statuses Should You Include?
ATTS is only as useful as your workflow states. Most teams group granular statuses into a smaller set of categories, such as:
- Backlog / Ready
- In Progress
- In Review
- Testing / Verification
- Blocked
- Done
In AI-assisted workflows, it can help to explicitly separate states like “Awaiting Human Review” vs. “Awaiting Automated Checks” so you can see whether humans or automation are the constraint.
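Grouping granular statuses into categories is usually just a lookup table. The specific status names below are illustrative; the useful pattern is the explicit fallback bucket, which makes unmapped states visible instead of silently dropping them:

```python
# Example mapping from tracker-specific statuses to ATTS categories
STATUS_CATEGORY = {
    "To Do": "Backlog / Ready",
    "Ready for Dev": "Backlog / Ready",
    "In Progress": "In Progress",
    "Code Review": "In Review",
    "Awaiting Human Review": "In Review",
    "Awaiting Automated Checks": "Testing / Verification",
    "QA": "Testing / Verification",
    "Blocked": "Blocked",
    "Done": "Done",
}

def categorize(status):
    # Unknown states land in a catch-all bucket so they show up in reports
    return STATUS_CATEGORY.get(status, "Uncategorized")
```

Keeping "Awaiting Human Review" and "Awaiting Automated Checks" as distinct keys (even if they roll up differently) is what lets you later split human vs. automation wait time.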
What Is All Work Time by Ticket Status (ATTS) Used For?
ATTS helps teams quantify non-development effort and delivery overhead. It answers questions like:
- How much time are developers spending in coordination or review instead of coding?
- Are bottlenecks forming in specific workflow states?
- Is delivery time being consumed by idle work, dependencies, or approvals?
By making invisible work visible, this metric supports more balanced planning and deeper insight into workflow health.
In AI-assisted delivery, ATTS can also help answer:
- Are AI tools reducing “In Progress” time, or just shifting time into “In Review,” “Testing,” or “Blocked”?
- Are human approvals and guardrails scaling with increased AI-generated throughput?
- Which workflow stages are least “AI-augmentable” and therefore most likely to become bottlenecks?
What Are Common Variations of ATTS?
ATTS may also be referred to as Full Ticket Time Breakdown, Lifecycle Time Analysis, or Time in Status. Common breakdowns include:
- By status category, such as development, review, testing, blocked, or backlog
- By ticket type, to compare features vs. bugs or chores
- By team or contributor, to highlight coordination-heavy workloads
- By age bracket, such as new vs. stale tickets
- By project or service, to identify slow-moving areas of the organization
Some teams report percentage of time in non-dev states, which focuses on time spent outside of “In Progress” or “Coding.”
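The non-dev-time percentage falls out directly from the per-status totals. A minimal sketch, assuming a dictionary of hours per status and a configurable set of "active development" states:

```python
def pct_non_dev(totals, dev_states=("In Progress", "Coding")):
    """Share of total ticket time spent outside active development states.
    totals: dict of status -> hours."""
    total = sum(totals.values())
    non_dev = sum(hours for status, hours in totals.items()
                  if status not in dev_states)
    return 100.0 * non_dev / total if total else 0.0

print(pct_non_dev({"In Progress": 10, "In Review": 20, "Blocked": 10}))
# → 75.0
```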
AI-era extensions that teams commonly add include:
- By AI involvement, such as tickets labeled “AI-assisted” vs. “not AI-assisted” (if you tag work)
- By automation handoff, such as time waiting on bots/agents vs. time waiting on humans
- By risk tier, such as “security-sensitive” vs. “standard” work, where AI output may require additional verification
How Do AI and Agentic AI Affect ATTS?
AI tools and agentic AI workflows can change ATTS in ways that are easy to misread unless you add the right breakdowns:
- Faster production can create review queues. If AI speeds up implementation, “In Progress” time may drop while “In Review” or “Testing” grows because human attention becomes the scarce resource.
- Automation can make state changes noisier. Bots that auto-transition tickets (or create rapid reopen/close loops) can increase status churn without reflecting meaningful progress.
- AI work may be “invisible” in the issue tracker. If agents generate code, run experiments, or resolve issues without consistent ticket updates, ATTS will undercount or misattribute where time is spent.
- New states may be needed. Many teams introduce explicit states for “Awaiting Human Approval,” “Awaiting Automated Validation,” or “AI Draft Ready” to separate human waiting time from automated processing time.
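Status churn from bots is detectable from the same transition data used to compute ATTS. This sketch flags transitions where the ticket dwelled in a state for less than a threshold, which often indicates automated loops rather than real progress; the threshold and input shape are assumptions:

```python
from datetime import datetime

def churn_score(transitions, min_dwell_minutes=5):
    """Return (total transitions, noisy transitions) for a ticket.
    transitions: list of (timestamp, status), oldest first.
    A transition is 'noisy' if the previous state lasted under the threshold."""
    noisy = 0
    for (t1, _), (t2, _) in zip(transitions, transitions[1:]):
        if (t2 - t1).total_seconds() / 60 < min_dwell_minutes:
            noisy += 1
    return len(transitions) - 1, noisy

transitions = [
    (datetime(2024, 5, 1, 9, 0), "In Review"),
    (datetime(2024, 5, 1, 9, 1), "In Progress"),  # bot bounce after 1 minute
    (datetime(2024, 5, 1, 10, 0), "In Review"),
]
print(churn_score(transitions))
# → (2, 1)
```

A high noisy-to-total ratio on a ticket is a signal to audit the automation rules before trusting its ATTS numbers.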
Used well, ATTS becomes a practical “handoff map” between humans, AI tools, and the rest of the delivery system.
What Are the Limitations of ATTS?
ATTS shows where time is spent, not why. Long stretches in “Blocked” or “In Review” don’t reveal the cause or urgency of the delay.
It also depends on timely ticket updates. If engineers delay status changes, the data may misrepresent where time was actually spent.
In AI-assisted workflows, ATTS can be distorted if automation changes ticket states without a consistent rule set, or if AI work happens outside the ticketing system and isn’t reflected in status changes.
To interpret this metric effectively, pair it with:
| Complementary Metric | Why It’s Relevant |
|---|---|
| Cycle Time | Tracks overall time from work start to finish—ATTS breaks that time into meaningful stages |
| Review Latency | Highlights how long work sits waiting for feedback in specific states like “In Review” |
| Work in Progress (WIP) | Provides context for multitasking or overloaded queues contributing to state transitions |
| Reopen Rate (if tracked) | Helps detect quality or automation issues that repeatedly send tickets backward in the workflow |
How Can You Improve ATTS?
Improving ATTS requires reducing idle or coordination-heavy time and unblocking critical path work more effectively.
- Analyze time in each state. Review historical ticket data to identify stages where time accumulates unnecessarily.
- Improve state hygiene. Ensure developers update ticket statuses consistently and accurately to maintain data integrity.
- Automate transitions when possible. Trigger state changes based on actions in Git or CI tools to reduce tracking friction.
- Focus on stuck tickets. Set alerts for tickets that exceed time thresholds in review, testing, or blocked states.
- Balance planning across stages. Don’t over-optimize for dev speed at the cost of review, QA, or delivery readiness.
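The stuck-ticket alerting described above can be as simple as per-status time budgets. Both input shapes here are assumptions for the sketch: `tickets` maps an id to its current status and hours in that status, and `thresholds` maps a status to its maximum acceptable hours:

```python
def stuck_tickets(tickets, thresholds):
    """Return ids of tickets whose current status has exceeded its budget.
    tickets: dict of id -> (status, hours_in_status).
    thresholds: dict of status -> max hours; unlisted statuses never alert."""
    return [
        ticket_id for ticket_id, (status, hours) in tickets.items()
        if hours > thresholds.get(status, float("inf"))
    ]

thresholds = {"In Review": 48, "Blocked": 24}
tickets = {"T-1": ("In Review", 72), "T-2": ("In Progress", 100)}
print(stuck_tickets(tickets, thresholds))
# → ['T-1']
```

In practice this runs on a schedule and posts to chat; the point is that thresholds are per-status, so a long "In Progress" stretch doesn't trigger the same alarm as a long "Blocked" one.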
AI-specific tactics that often improve ATTS without sacrificing quality:
- Add downstream capacity when AI increases throughput. If implementation gets faster, invest in review, testing, and release capacity (human or automated) so queues don’t just move downstream.
- Use AI to reduce “Blocked” time. AI can help with first-pass debugging, log summarization, dependency discovery, and clarification questions—but ownership and escalation paths still matter.
- Make human-in-the-loop checkpoints explicit. Use clear states like “Awaiting Human Review/Approval” so you can see (and manage) the human bottlenecks created by agentic AI.
- Audit automated status rules. Ensure agent/bot transitions represent meaningful workflow stages, not just tool events.
ATTS helps answer a critical delivery question: “Where is our time actually going?” When tracked well, it turns ticket history into a clear blueprint for improvement.