As engineering leaders, we’re constantly balancing the need to deliver value quickly with the need to maintain a healthy, sustainable team environment. Two prominent frameworks have emerged to help us measure and optimize our teams: DORA (DevOps Research and Assessment) and SPACE.
But which one should you choose? Or should you use both?
Let me walk you through realistic scenarios that illustrate when each framework shines, where each falls short, and how they can work together to provide a complete picture of your team’s health and performance.
Understanding the Frameworks
DORA: The Delivery Performance Lens
DORA focuses on four key metrics across two categories that predict software delivery performance:
Throughput:
- Lead Time for Changes: Time from code commit to production
- Deployment Frequency: How often you deploy to production
Stability:
- Mean Time to Recovery (MTTR): Time to recover from production failures
- Change Failure Rate: Percentage of deployments causing production issues
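As a rough sketch of how these four metrics might be computed, here is a minimal example over hypothetical deployment and incident records (the field names and data model are illustrative, not from any particular tool):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: commit time, deploy time, and whether
# the deployment caused a production issue.
deployments = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 11, 0), "failed": False},
    {"committed": datetime(2024, 5, 2, 10, 0), "deployed": datetime(2024, 5, 2, 14, 0), "failed": True},
    {"committed": datetime(2024, 5, 3, 9, 30), "deployed": datetime(2024, 5, 3, 10, 30), "failed": False},
]
# Incident records with start and recovery timestamps, for MTTR.
incidents = [
    {"started": datetime(2024, 5, 2, 14, 5), "recovered": datetime(2024, 5, 2, 15, 5)},
]

days_in_window = 30  # measurement window

# Lead time for changes: median commit-to-production duration
# (median is more robust to outlier deploys than the mean).
lead_time = median(d["deployed"] - d["committed"] for d in deployments)

# Deployment frequency: deploys per day over the window.
deploy_frequency = len(deployments) / days_in_window

# Change failure rate: share of deployments that caused a production issue.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# MTTR: mean time from incident start to recovery.
mttr = sum((i["recovered"] - i["started"] for i in incidents), timedelta()) / len(incidents)

print(lead_time, deploy_frequency, change_failure_rate, mttr)
```

In practice these numbers come from your CI/CD and incident-management systems; the point is that all four metrics reduce to simple arithmetic over event timestamps.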
SPACE: The Broader Productivity View
SPACE takes a broader approach across five dimensions:
- Satisfaction and Well-being: Developer happiness and fulfillment
- Performance: Outcomes and business impact
- Activity: Quantity of work completed
- Communication and Collaboration: Team interaction quality
- Efficiency and Flow: Minimizing interruptions and maintaining focus
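Unlike DORA, several SPACE dimensions are measured through surveys rather than telemetry. A minimal sketch of aggregating Likert-scale responses into per-dimension scores (the questions and alert threshold here are illustrative, not part of the framework):

```python
from statistics import mean

# Hypothetical quarterly survey: 1-5 Likert responses per developer.
survey = {
    "q1_satisfaction": [4, 3, 2, 4, 3],
    "q2_sustainable_pace": [2, 2, 3, 2, 1],
    "q3_collaboration": [4, 4, 3, 5, 4],
}

ALERT_THRESHOLD = 3.0  # flag any dimension averaging below "neutral"

# Average each question and flag dimensions that need attention.
scores = {question: mean(responses) for question, responses in survey.items()}
flags = [q for q, s in scores.items() if s < ALERT_THRESHOLD]

print(scores)
print("needs attention:", flags)
```

Even this crude aggregation surfaces the kind of early warning signal that delivery metrics alone would miss.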
Scenario 1: The High-Performing Team with Hidden Problems
The Situation: Team Alpha has impressive DORA metrics. They deploy 15 times per day, with a two-hour lead time and 99.9% uptime. Leadership loves the numbers. But morale feels off.
What DORA Shows: Elite performance across all four metrics. The team is a model for the rest of the organization.
What DORA Misses: During one-on-ones, you discover developers are working 60+ hour weeks to maintain these metrics. Two senior engineers are actively job hunting, and the team’s code review discussions have become terse and defensive.
Left unchecked, these problems would eventually tank your DORA metrics too. When senior talent walks out and burned-out developers start cutting corners, code quality suffers and deployment failures climb.
But by then, you’re in damage control mode. SPACE catches these warning signs early, before they become delivery problems you can measure in production.
Where SPACE Adds Value:
- Satisfaction surveys reveal burnout and low job satisfaction scores
- Communication metrics show declining collaboration quality
- Flow metrics indicate constant context switching and interruptions
The Insight: DORA metrics can be maintained through unsustainable practices. SPACE helps you identify when optimization comes at the cost of team health.
Action Taken: Implement sustainable practices: reduce deployment pressure, increase team size, and focus on developer well-being alongside delivery metrics.
Scenario 2: The Struggling Startup Team
The Situation: Team Delta at a growing startup is missing deadlines, customers are complaining about bugs, and morale is low. You need to quickly identify and fix the biggest problems.
What SPACE Shows: Low satisfaction, poor communication scores, and lots of activity but little meaningful output. The comprehensive view is overwhelming when you need quick wins.
What DORA Reveals:
- Lead time: three weeks (mostly waiting for manual testing)
- Deployment frequency: Once per month
- MTTR: eight hours (no monitoring or alerting)
- Change failure rate: 25% (no automated testing)
The Insight: DORA’s focused metrics immediately pinpoint bottlenecks in the delivery pipeline. The team needs operational improvements before addressing broader productivity concerns.
Action Taken: Implement automated testing, set up monitoring, and establish a CI/CD pipeline. Once delivery stabilizes, introduce SPACE metrics to ensure sustainable growth.
Scenario 3: The Research-Heavy Team
The Situation: Team Gamma works on machine learning infrastructure with long research cycles, experimental features, and uncertain outcomes. Traditional velocity metrics don’t capture their value.
What DORA Shows: Poor metrics across the board: infrequent deployments, long lead times, and difficulty measuring “failures” in experimental work.
What SPACE Captures:
- Performance: Business impact of research discoveries and architectural improvements
- Satisfaction: High engagement with challenging technical problems
- Activity: Patents filed, papers published, knowledge sharing sessions
- Collaboration: Cross-functional partnerships with data science teams
The Insight: DORA’s focus on delivery frequency doesn’t align with research-oriented work patterns. SPACE provides a more appropriate framework for measuring diverse contributions.
Action Taken: Use SPACE as the primary framework, but adapt DORA metrics for infrastructure deployments and tooling releases.
Scenario 4: The Scaling Organization
The Situation: Your company is growing from 3 teams to 15. You need consistent metrics across different contexts. Some teams build customer features, others maintain infrastructure, and still others focus on developer tools.
The Challenge: Different teams have different work patterns and success criteria.
DORA Approach: Provides standardized metrics that work across most teams for easy comparison and benchmarking.
SPACE Approach: Allows customization for different team contexts, but makes cross-team comparison difficult.
The Hybrid Solution:
- DORA as the baseline: All teams track the four key metrics with context-appropriate definitions
- SPACE for specialization: Teams add relevant dimensions based on their specific challenges and goals
- Quarterly reviews: Combine quantitative DORA trends with qualitative SPACE insights
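One way to encode this hybrid is a per-team metric registry: the DORA baseline stays identical everywhere for benchmarking, while each team layers on its own SPACE dimensions. A sketch, with hypothetical team names and dimension labels:

```python
# Every team tracks the same DORA baseline for cross-team comparison.
DORA_BASELINE = ["lead_time", "deploy_frequency", "change_failure_rate", "mttr"]

# Each team adds SPACE dimensions suited to its context (illustrative).
TEAM_METRICS = {
    "feature-team": DORA_BASELINE + ["satisfaction", "flow"],
    "platform-team": DORA_BASELINE + ["collaboration", "performance"],
    "ml-infra-team": DORA_BASELINE + ["activity", "satisfaction"],
}

def comparable_metrics(teams):
    """Return the metrics shared by all teams -- the set that is
    safe to use for cross-team benchmarking."""
    shared = set.intersection(*(set(metrics) for metrics in teams.values()))
    return sorted(shared)

print(comparable_metrics(TEAM_METRICS))
```

The benchmarkable set is exactly the DORA baseline, which is the point: standardize where comparison matters, customize where context matters.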
When to Choose Which Framework
Start with DORA if:
- You need to improve delivery capabilities quickly
- Your team has clear delivery bottlenecks
- Leadership requires concrete, comparable metrics
- You’re establishing measurement discipline for the first time
- Your work fits traditional feature delivery patterns
Lead with SPACE if:
- Your team shows signs of burnout or low engagement
- You have diverse work types (research, maintenance, platform)
- Team collaboration and communication need improvement
- You’ve optimized delivery but plateaued on business impact
- You have resources for thorough measurement
Use both when:
- You want a complete picture of team health and performance
- You’re scaling and need both standardization and flexibility
- You’ve learned one framework and want to expand
- You’re dealing with complex organizational dynamics
Implementation Strategy: A Practical Approach
Phase 1: Foundation (Months 1–3)
Start with the two DORA metrics most relevant to your current bottlenecks. For most teams, this means lead time and deployment frequency.
Phase 2: Stabilization (Months 4–6)
Add the remaining DORA metrics and establish consistent measurement practices. Begin quarterly developer satisfaction surveys.
Phase 3: Enhancement (Months 7–12)
Layer in relevant SPACE dimensions as team needs emerge. Focus on metrics that complement rather than duplicate your DORA insights.
Phase 4: Optimization (Ongoing)
Use data from both frameworks to make informed decisions about team investments, process changes, and organizational improvements.
Common Pitfalls and How to Avoid Them
The Metric Gaming Trap
Problem: Teams optimize for metrics rather than outcomes.
Solution: Use metrics for system improvement, not individual evaluation. Regularly review whether your metrics still align with your goals.
The Analysis Paralysis Problem
Problem: Too many metrics lead to decision gridlock.
Solution: Choose three to five key metrics as your North Star, and review others monthly, not daily.
The Context Ignorance Issue
Problem: Applying the same metrics to all teams regardless of their work patterns.
Solution: Customize metric definitions and targets based on team context while maintaining comparable frameworks.
Measuring Success: Beyond the Numbers
The true test of any metrics framework is whether it helps you make better decisions about your team and your products. The numbers themselves are secondary.
Here are the questions I ask quarterly:
- Are our metrics helping us identify problems before they become critical?
- Do our developers trust that metrics are used to improve systems, not evaluate individuals?
- Are we making data-informed decisions about team investments and process changes?
- Do our metrics reflect what leadership and customers actually care about?
The Future: Intelligent Metrics
Looking ahead, the most effective approach combines the best of both frameworks:
- DORA’s objectivity with SPACE’s breadth
- Quantitative precision with qualitative insights
- Delivery focus with human-centered measurement
We’re already seeing tools that automatically collect DORA metrics while providing SPACE-style insights through sentiment analysis and collaboration pattern recognition.
Choose Your Adventure
As engineering leaders, we don’t have to choose between delivery excellence and team well-being. DORA gives us the operational discipline to deliver reliably; SPACE ensures we’re building sustainable, engaging work environments.
Start with DORA to establish measurement discipline and improve your delivery capabilities. Once you have that base in place, thoughtfully add SPACE dimensions to address your team’s specific challenges.
Focus on building systems that consistently deliver value while keeping your team engaged, growing, and excited about their work. Don’t aim to maximize any single metric.
The best metrics framework is the one that helps your team improve in ways that matter to your users, your business, and your people. Sometimes that’s DORA, sometimes it’s SPACE, and often it’s a thoughtful combination of both.
What metrics frameworks have worked best for your teams? I’d love to hear about your experiences and the creative ways you’ve adapted these frameworks to your context.