Decoding the 2025 DORA Report: What AI-Assisted Development Means for Engineering Excellence

Ninety per cent of developers now use AI at work, up from 76% just a year ago. Eighty per cent report productivity gains. AI assistants generate code in minutes that would have taken hours to write manually. Individual developers feel more effective than ever.

Yet something doesn’t add up. Instability metrics remain stubbornly problematic. Burnout levels haven’t budged. Many engineering leaders report that individual productivity gains aren’t translating into organisational delivery improvements. The code arrives faster, but deployments don’t. Developers work more effectively, but delivery speed remains unchanged.

This paradox sits at the heart of the 2025 DORA AI Report, based on survey data from 5,000 engineering professionals globally. Rather than celebrating AI’s inevitable transformation of software development, the report reveals something more nuanced and arguably more valuable: AI amplifies your organisation. If your systems work well, AI multiplies those advantages. If your systems have dysfunction, AI accelerates those problems too.

The question for engineering leaders isn’t “should we adopt AI?” That ship has sailed. The median developer now has 16 months of AI experience and uses AI tools for approximately 2 hours daily (a quarter of their workday). The real question is: what do we want AI to amplify?

This matters because teams are making critical decisions right now about how to integrate AI-assisted development into their workflows. Some organisations are seeing AI transform their delivery capabilities. Others are seeing it create more problems than it solves. The difference isn’t the AI tools themselves. It’s the foundational capabilities that determine whether AI helps or harms.

This article isn’t a summary of the DORA report. It’s an interpretation focused on extracting practical insights for engineering teams navigating AI adoption. What separates teams that thrive with AI from those that struggle? Why do some organisations see AI as a force multiplier whilst others see marginal gains? What foundational capabilities determine AI’s impact?

The data provides clear answers. More importantly, it provides a roadmap for transforming individual productivity improvements into genuine organisational advantage.

AI as Amplifier, Not Solution

The DORA report’s most important finding isn’t about AI capabilities or adoption rates. It’s about what AI does to organisations. The data reveals something that should fundamentally reshape how engineering leaders think about AI adoption: AI is an amplifier, not a solution.

The report identified seven distinct team patterns:

  1. Harmonious high-achievers (20%): Excel across all dimensions with low burnout and high stability
  2. Pragmatic performers (20%): Strong delivery with average well-being
  3. Constrained by process (17%): Good stability but high burnout and friction
  4. Stable and methodical (15%): Deliberate pace, high quality, low throughput
  5. Legacy bottleneck (11%): Constant reactivity, high instability
  6. Foundational challenges (10%): Survival mode with gaps across all areas
  7. High impact, low cadence (7%): Strong product performance but low throughput and high instability

Here’s what matters: AI doesn’t move teams between these profiles. It amplifies whichever profile they already inhabit.

High-performing organisations with strong foundations see AI multiply their advantages. They adopt AI tools and watch individual productivity gains flow through to team delivery improvements and organisational performance. AI helps them deploy faster whilst maintaining stability. The amplification effect is overwhelmingly positive.

Struggling organisations see AI accelerate their existing dysfunctions. They adopt the same AI tools and productivity gains get absorbed by systemic constraints. Developers generate code faster, but deployment pipelines, code review, and testing infrastructure haven’t evolved. The bottleneck shifts downstream, work-in-progress accumulates, and instability increases.

Perhaps the most striking finding: 40% of teams achieve both high throughput and high stability. This definitively disproves the speed versus stability trade-off that many organisations still treat as inevitable.

You’ve likely experienced this tension yourself. Individual developers report feeling more productive. They’re generating more code, solving problems faster. Yet when you look at team-level delivery metrics, the improvement is marginal or non-existent. The code arrives quickly but takes days to get through review, testing, and deployment.

This isn’t an AI problem. It’s a systems problem that AI is exposing. If your deployment process is slow and manual, AI won’t fix it; it will just help you generate, more quickly, code that sits waiting for manual deployment. If code review is already a bottleneck, AI will make it worse. If testing practices are weak, AI will help you ship undertested code at higher velocity.

The uncomfortable truth: AI forces a reckoning with your delivery system’s actual constraints. Those constraints existed before AI. They were just less visible when code generation was slower.

The Seven Foundational Capabilities

The report introduces the DORA AI Capabilities Model: seven foundational capabilities that determine whether AI adoption helps or harms your organisation. These aren’t aspirational “best practices” disconnected from measurable outcomes. They’re capabilities that the data shows directly amplify AI’s positive effects whilst reducing its negative impacts.

Think of these as prerequisites, not nice-to-haves. The teams thriving with AI have built these capabilities first. The teams struggling with AI are trying to use AI to compensate for missing foundations.

1. Clear and Communicated AI Stance

Organisations with a clear, communicated AI policy see amplified positive effects on both throughput and individual effectiveness. Teams know which tools are permitted, what usage expectations exist, and how AI fits into their workflow. The clarity itself matters more than whether the policy is permissive or restrictive.

Without this foundation, teams face uncertainty that creates friction:

  • Developers don’t know if they should use AI for security-sensitive code
  • Managers don’t know how to evaluate AI-assisted work
  • Legal and compliance teams lack frameworks for assessing risk
  • Ambiguity slows everything down

Practically, this means making expectations explicit. Document which AI tools your organisation permits. Clarify what types of work are appropriate for AI assistance. Communicate the policy to all staff, not just engineering. Support experimentation within clear boundaries.
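One lightweight way to make that explicitness stick is to keep the policy in a form both people and tooling can read. The sketch below is purely illustrative and not something the DORA report prescribes; the tool names, task categories, and the is_permitted helper are all hypothetical.

    # Illustrative only: a minimal, machine-readable AI usage policy.
    # Tool names and task categories are hypothetical examples.
    from dataclasses import dataclass

    AI_POLICY = {
        "permitted_tools": {"assistant-a", "assistant-b"},           # approved assistants
        "permitted_tasks": {"boilerplate", "tests", "refactoring"},  # work AI may assist with
        "restricted_tasks": {"security-sensitive", "credentials"},   # human-only work
        "review_required": True,                                     # AI-assisted changes still get human review
    }

    @dataclass
    class WorkItem:
        tool: str
        task_type: str

    def is_permitted(item: WorkItem, policy: dict = AI_POLICY) -> tuple[bool, str]:
        """Return whether AI assistance is allowed for this work item, with a reason."""
        if item.tool not in policy["permitted_tools"]:
            return False, f"{item.tool} is not an approved tool"
        if item.task_type in policy["restricted_tasks"]:
            return False, f"{item.task_type} work must be done without AI assistance"
        if item.task_type not in policy["permitted_tasks"]:
            return False, f"{item.task_type} is not covered by the policy; ask before using AI"
        return True, "permitted, subject to normal code review"

    print(is_permitted(WorkItem(tool="assistant-a", task_type="security-sensitive")))

The point isn’t the code; it’s that a policy written this concretely leaves no room for the ambiguity that slows teams down.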

The data shows that clarity unlocks benefits across multiple dimensions. When teams know where they stand, they can optimise their AI usage for those conditions instead of navigating constant uncertainty.

2. Healthy Data Ecosystems

Organisations with quality, accessible, unified data see dramatically amplified AI impact on organisational performance. This validates the oft-repeated truism “AI models are only as good as the data” at the organisational level, but the mechanism is subtler than you might expect.

Teams with messy data ecosystems face a double problem. Their AI tools provide only generic assistance because they can’t access context-specific information. Meanwhile, developers spend time cleaning and reconciling data instead of solving problems. The productivity gains from AI assistance get consumed by data wrangling overhead before they can translate into delivery improvements.

Healthy data ecosystems enable something fundamentally different. When AI can access your architecture documentation, understand your data models, and reference your coding standards, its suggestions become dramatically more relevant. When developers can trust that data is accurate and accessible, they spend less time verifying and more time building.

This isn’t primarily about AI training data. It’s about the operational data infrastructure your organisation depends on. Three questions reveal your current state:

  • Do developers have easy access to the information they need?
  • Is data quality high enough that decisions can be made confidently?
  • Are data sources unified or fragmented across disconnected systems?

Investment in data infrastructure compounds when AI tools can leverage that foundation. Conversely, poor data infrastructure limits AI to surface-level assistance that provides marginal value.

3. AI-Accessible Internal Data

Related but distinct from healthy data ecosystems, this capability focuses specifically on connecting AI tools to your internal context. Generic AI assistance has limited value compared to AI that understands your specific codebase, your architectural decisions, your team’s conventions, and your domain model.

The teams seeing the strongest benefits from AI have integrated their AI tools with internal documentation, codebases, and knowledge repositories. Their AI assistants don’t just know general programming patterns; they know this organisation’s patterns. They don’t suggest generic solutions; they suggest solutions consistent with existing architectural decisions.

Practically, this means going beyond using AI tools as standalone assistants. It means providing AI tools with access to your internal wikis, architecture decision records, API documentation, and code repositories. It means investing in integration rather than treating AI as purely external.
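What that integration looks like varies by tool, but a common underlying pattern is context injection: retrieve the most relevant internal material and include it alongside the task. The sketch below illustrates the idea only; the in-memory document store, the keyword scoring, and the build_prompt helper are hypothetical stand-ins for whatever wiki, ADR repository, or code search your organisation actually uses.

    # A minimal sketch of context injection: ground AI suggestions in internal docs.
    # The document store and keyword scoring are hypothetical stand-ins for a real
    # wiki, ADR repository, or code-search integration.

    INTERNAL_DOCS = [
        {"title": "ADR-012: Service-to-service auth",
         "body": "All internal services authenticate via short-lived tokens issued by the gateway."},
        {"title": "Coding standard: error handling",
         "body": "Errors are wrapped with context and logged once, at the service boundary."},
    ]

    def search_internal_docs(query, docs=INTERNAL_DOCS, top_k=2):
        """Crude keyword relevance; a real integration would use proper search or embeddings."""
        terms = set(query.lower().split())
        scored = [(sum(t in (d["title"] + d["body"]).lower() for t in terms), d) for d in docs]
        return [d for score, d in sorted(scored, key=lambda pair: -pair[0]) if score > 0][:top_k]

    def build_prompt(task):
        """Assemble a prompt that carries organisational context alongside the task."""
        context = "\n\n".join(f"{d['title']}\n{d['body']}" for d in search_internal_docs(task))
        return f"Internal context:\n{context}\n\nTask: {task}\nFollow the conventions above."

    print(build_prompt("add error handling to the auth client"))

The mechanism matters less than the outcome: suggestions that reflect this organisation’s decisions rather than generic patterns.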

Many organisations hesitate here, worried about exposing proprietary information to AI systems. That’s a legitimate concern that requires thoughtful security and privacy decisions. But the data shows that organisations finding ways to safely provide internal context see dramatically better results than those keeping AI at arm’s length.

4. Strong Version Control Practices

Frequent commits amplify individual effectiveness with AI. Strong rollback capabilities amplify team performance. Version control acts as a psychological safety net that enables confident experimentation.

This finding makes intuitive sense once you consider what AI changes about development workflow. AI enables rapid generation of code that requires iterative refinement. Developers try approaches, evaluate results, and adjust. Without strong version control practices, this experimentation becomes risky. Developers worry about breaking working code or losing track of what they’ve tried.

Strong version control practices change the risk calculus:

  • Frequent commits create checkpoints developers can return to confidently
  • Comprehensive rollback capabilities mean experiments that don’t work out aren’t disasters
  • Good branching strategies enable trying multiple approaches in parallel without interference

The implication is straightforward but often overlooked: strengthen your version control practices before AI accelerates your change volume. Teams that maintain sloppy commit discipline or poor branching hygiene will struggle when AI increases the rate of change.

5. Working in Small Batches

Teams working in small batches show slightly reduced individual effectiveness from AI, which initially seems counterintuitive. Why would a good practice reduce AI’s benefits? The answer reveals something important about what AI actually optimises for.

AI excels at generating large volumes of code quickly. This creates perceptions of high individual effectiveness. Developers see functions materialise, feel productive, and report gains. But raw code generation volume isn’t what drives organisational outcomes.

Teams working in small batches prioritise outcomes over output. They make smaller changes more frequently. They integrate work continuously. They get feedback quickly and adjust. These practices reduce friction and improve product performance, but they constrain AI’s ability to generate large code volumes in single sessions.

The trade-off is worth making: slightly reduced individual effectiveness scores in exchange for meaningfully better product performance and reduced friction. The data shows that teams working in small batches get better outcomes from AI, even if individual developers feel slightly less productive.

Practically, this means resisting AI’s temptation to generate large chunks of code. Break work into smaller pieces. Make changes incrementally. Integrate frequently. Get feedback early. Don’t let AI’s ability to generate code quickly seduce you into abandoning the small-batch practices that actually drive quality and velocity.

6. User-Centric Focus

This is perhaps the most striking finding in the entire report: without a user-centric focus, AI adoption has a negative impact on team performance. Not neutral. Not marginal. Negative.

AI can actively harm teams that don’t centre user needs. Those same AI tools, in the hands of teams with strong user-centric focus, produce exceptionally strong positive effects. The difference isn’t the tools. It’s whether teams maintain focus on solving actual user problems.

The mechanism isn’t mysterious. AI makes it easier to build things. It lowers the cost of implementation. This is valuable when you’re building the right things. It’s destructive when you’re building the wrong things. User-centric focus acts as the guidance system that keeps AI-accelerated development pointed at genuine value.

Teams without this focus find AI enables them to build more features that users don’t want, faster. They accumulate complexity that doesn’t serve user needs. They optimise for code generation velocity rather than user outcome achievement. The amplification effect works against them.

Teams with strong user-centric focus use AI to build solutions to actual user problems more quickly. They validate user needs before accelerating implementation. They use user feedback to guide AI-assisted refinement. The amplification effect works for them.

The practical implication is non-negotiable: establish user-centric practices before scaling AI adoption. Use user needs as the North Star for all AI-assisted development. Incorporate rich user understanding into roadmaps and prioritisation. Don’t let AI’s speed obscure the fundamental question of whether you’re building things users actually need.

7. Quality Internal Platforms

AI has negligible effect on organisational performance when platform quality is low. It has strong positive effects when platform quality is high. Internal platforms aren’t just supporting infrastructure; they’re the prerequisite that determines whether AI provides organisational value at all.

This finding should reshape how engineering leaders think about platform engineering. Ninety per cent of organisations have adopted platform engineering, but platform quality varies dramatically.

High-quality platforms:

  • Enable fast feedback loops
  • Provide integration points for AI tools
  • Abstract complexity so AI focuses on business logic
  • Support experimentation with safety nets

Low-quality platforms:

  • Create friction that consumes productivity gains
  • Force developers to work around platform limitations
  • Bottleneck deployment, testing, and integration despite faster code generation

The data shows platform quality as the strategic leverage point for AI adoption. Reliability and security serve as baseline expectations. User experience features like clear feedback on tasks, straightforward workflows, and effective automation differentiate high-quality platforms. The ability to integrate AI tools matters increasingly. Most importantly, platforms should make the right thing easy and the wrong thing hard.

Invest in platforms before expecting AI to provide organisational benefits. Treat your platform as an internal product with developer customers. Focus on holistic developer experience, not just feature checklists. Prioritise clear feedback mechanisms, which the data shows as most correlated with positive developer experience.
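To make “clear feedback on tasks” concrete, here is one possible shape for a platform gate that collapses scattered signals into a single, actionable verdict. The check names, results, and output format are assumptions for illustration, not features of any particular platform.

    # Illustrative sketch: turn scattered pipeline signals into one clear verdict.
    # Check names and details are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        name: str
        passed: bool
        detail: str

    def deployment_verdict(checks):
        """Summarise all checks into a single message a developer can act on."""
        failures = [c for c in checks if not c.passed]
        if not failures:
            return "SAFE TO DEPLOY: all checks passed"
        return "BLOCKED:\n" + "\n".join(f"  - {c.name}: {c.detail}" for c in failures)

    print(deployment_verdict([
        CheckResult("unit-tests", True, "2,143 passed"),
        CheckResult("integration-tests", False, "contract test against billing service failed"),
        CheckResult("security-scan", True, "no new findings"),
    ]))

Whether the verdict arrives via a CI gate, a bot comment, or a dashboard is secondary; what matters is that developers learn quickly and unambiguously whether a change is safe.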

How These Capabilities Work Together

These seven capabilities form a system. Each amplifies the others. Clear AI policy makes platform investments more effective. Healthy data ecosystems enhance AI-accessible internal context. Strong version control practices support working in small batches. User-centric focus guides platform development. They work together to create the foundation that determines whether AI helps or harms.

The teams thriving with AI have built most or all of these capabilities. The teams struggling with AI are missing several. The path forward isn’t better AI tools. It’s building these foundations so AI can safely amplify your delivery capabilities.

The Speed-Stability Paradox

The 2024 DORA report showed AI adoption associated with reduced throughput. A year later, that’s changed. The 2025 data shows AI now has a positive association with software delivery throughput. Teams have adapted. They’ve learned how to use AI to accelerate delivery.

Yet instability persists (a short sketch after the list shows how these rates are calculated):

  • Only 8.5% achieve 0-2% change failure rate (62.2% experience 8-16% or higher)
  • Only 6.9% achieve 0-2% rework rate (52.6% experience 8-16% or higher)
  • AI continues to increase software delivery instability
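For reference, both figures are simple ratios over recent changes. The sketch below shows how a team might compute rough equivalents from its own deployment records; the record format is a hypothetical example, and the report’s survey definitions are more precise than this.

    # Hypothetical deployment records; in practice these come from your delivery tooling.
    deployments = [
        {"id": "d1", "caused_incident": False, "needed_rework": False},
        {"id": "d2", "caused_incident": True,  "needed_rework": True},
        {"id": "d3", "caused_incident": False, "needed_rework": True},
        {"id": "d4", "caused_incident": False, "needed_rework": False},
    ]

    change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)
    rework_rate = sum(d["needed_rework"] for d in deployments) / len(deployments)

    print(f"Change failure rate: {change_failure_rate:.1%}")  # 25.0% for this sample
    print(f"Rework rate: {rework_rate:.1%}")                  # 50.0% for this sample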

This creates a paradox worth examining. Teams have successfully adapted AI to improve speed. They haven’t successfully adapted their systems to maintain stability at AI-accelerated speeds.

What’s Happening

The dynamic is straightforward once you look at the incentives. Speed improvements are immediately visible and celebrated. A feature that used to take a week now takes two days. Stakeholders notice. Developers feel effective. The feedback loop reinforces AI adoption for speed.

Instability problems manifest later and more diffusely. A change that introduces a subtle bug doesn’t fail immediately. It passes initial testing. It gets deployed. Then it fails in production under specific conditions that weren’t anticipated. The connection between AI-accelerated development and instability issues isn’t as direct or immediate as the connection between AI and speed improvements.

Teams naturally optimise for what they measure and reward. Most organisations measure and reward delivery speed more than stability. So teams adapt AI to improve speed without equivalent investment in adapting their stability practices.

The Testing Disconnect

The testing data reveals this pattern clearly. Sixty-two per cent of respondents use AI for creating test cases. Fifty-nine per cent perceive that AI has positively impacted code quality. These numbers suggest teams are trying to maintain quality practices whilst accelerating development.

Yet the instability metrics tell a different story. The disconnect suggests something important: local code quality may be improving whilst system-level quality suffers. AI might help write better individual functions with better error handling and edge case coverage. But those well-written functions integrate into systems that haven’t evolved to safely handle AI-accelerated change volume.

Gene Kim’s foreword to the report frames this through control theory. Drawing on the Nyquist stability criterion, he argues that a system remains stable only when its feedback loops keep pace with its rate of change. AI accelerates code generation. Testing and validation infrastructure hasn’t kept pace. The system oscillates between states (instability) because feedback doesn’t match the speed of change.

This explains why “trust but verify” remains an incomplete solution without infrastructure investment. Teams correctly maintain scepticism toward AI-generated code. They verify output before integrating it. But verification infrastructure designed for human-speed development struggles with AI-accelerated volume.

Consider what happens in practice. A developer uses AI to generate a new feature. They review the code carefully, catching several issues. They write tests for the happy path and obvious edge cases. The code passes review and testing. It deploys successfully. Then it fails in production because of an interaction with another recently-changed system that neither the developer nor the tests anticipated.

The individual verification was thorough. The system-level verification wasn’t adequate for the accelerated pace of change across multiple teams using AI simultaneously. The testing infrastructure wasn’t designed to catch integration issues emerging from high-velocity, multi-team AI-assisted development.

The Opportunity

The opportunity here is significant. Forty per cent of teams achieve both high speed and high stability. This proves it’s possible. These teams haven’t abandoned AI for speed. They’ve invested in the verification infrastructure needed to maintain stability at AI-accelerated speeds.

What high-performers do differently:

  • Treat testing infrastructure as strategic investment that enables speed, not overhead that slows it
  • Evolve testing practices to match AI-accelerated development
  • Build fast feedback loops that keep pace with AI-generated code volume
  • Invest in automated verification that catches system-level integration issues, not just function-level correctness

The path forward isn’t slowing down AI-assisted development. It’s catching stability practices up to speed practices. Fast feedback loops need to match AI-accelerated generation. Verification infrastructure needs to handle higher change volume. Testing practices need to evolve for the AI era.

Teams that solve this paradox will compound their advantages. They’ll maintain the speed gains AI enables whilst avoiding the instability tax that limits others.

Platform Quality and Value Stream Management

Two capabilities deserve special attention for how they connect individual productivity to organisational performance: platform quality and value stream management. These aren’t just additional items on the capabilities list. They’re the infrastructure and practice that determine whether AI’s benefits flow through your entire delivery system or get absorbed by constraints.

Platform quality determines whether AI has organisational impact at all. Value stream management ensures that impact flows to the right places. Together, they transform AI from a tool that makes individuals more productive into a force that makes organisations more effective.

Why Platform Quality Is the Strategic Leverage Point

The data on platform quality is unambiguous. AI has negligible effect on organisational performance when platform quality is low. It has strong positive effects when platform quality is high. This isn’t a marginal difference. It’s the difference between AI providing organisational value or not.

Think about what this means practically. Your developers adopt AI tools. They report feeling more productive. They generate code faster, debug more efficiently, solve problems more quickly. All of that individual effectiveness exists regardless of platform quality. But whether that effectiveness translates into organisational outcomes depends entirely on your platform.

High-quality platforms:

  • Enable fast feedback loops (Gene Kim’s control theory emphasis)
  • Provide clear feedback on tasks (most correlated with positive developer experience)
  • Offer integration points where AI tools connect to internal context
  • Abstract complexity so AI-assisted developers focus on business logic
  • Support safe experimentation through reliable rollback, comprehensive monitoring, automated verification

Low-quality platforms:

  • Create friction that consumes productivity gains
  • Leave developers generating code faster, then waiting hours for deployment pipelines
  • Let features that AI helped write quickly sit blocked by manual infrastructure provisioning
  • Allow local code quality to improve whilst integration testing infrastructure can’t keep pace

The platform becomes the constraint that determines system throughput. AI optimises a non-constraint (code generation), which increases work-in-progress without improving delivery speed. Worse, it can decrease stability as higher-quality individual changes overwhelm inadequate integration and deployment infrastructure.

Ninety per cent of organisations have adopted platform engineering. Platform quality varies dramatically. Some treat platforms as internal products with dedicated teams, user research, and continuous improvement. Others create platforms as one-time projects that receive minimal maintenance. The quality difference determines AI’s organisational impact.

Reliability and security serve as baseline expectations. Developers won’t trust platforms that frequently break or expose security risks. Beyond baseline, clear feedback mechanisms rank highest. Platforms that provide quick, actionable feedback on whether changes are safe to deploy enable confident, rapid development.

Holistic developer experience matters more than feature checklists. A platform with every conceivable feature but poor user experience creates friction. A platform with fewer features but excellent experience for the most common workflows enables flow. Integration capabilities increasingly differentiate high-quality platforms.

The strategic implication is clear: invest in platform quality before expecting AI to provide organisational benefits. Treat platform engineering as prerequisite, not afterthought.

Value Stream Management: From Local Wins to Organisational Advantage

The gap between individual productivity and organisational performance has a name: constraints. Value stream management provides the framework for identifying and addressing those constraints systematically.

The data shows teams with strong value stream management practices spend significantly more time on valuable work and see dramatically amplified AI benefits on organisational performance. This isn’t coincidental. Value stream management ensures AI gets applied to actual bottlenecks rather than just acceleration points.

Theory of Constraints provides the mental model. System output is determined by the bottleneck, not by the capacity of non-constraints. Optimising a non-constraint increases work-in-progress accumulation at the bottleneck without improving system throughput. This is precisely what happens when teams apply AI to code generation without addressing downstream constraints.

Consider the common failure pattern. AI dramatically accelerates coding. Developers generate features faster. Code piles up waiting for review. Review becomes the bottleneck. You could add more reviewers, but then deployment infrastructure becomes the bottleneck. You could accelerate deployment, but then testing becomes the bottleneck. Each optimisation shifts the constraint without addressing the systemic issue.

Value stream management maps your entire delivery flow to identify where work actually waits and accumulates. It makes constraints visible. Once visible, you can make conscious decisions about where to apply AI.

If code review is your bottleneck, apply AI there. Use AI to help reviewers understand changes more quickly, identify potential issues, and suggest improvements. If requirements gathering constrains your delivery, use AI to help product teams clarify and document requirements more effectively. If testing is your constraint, invest in AI-assisted test generation and validation.

The teams seeing the strongest organisational benefits from AI aren’t those using AI most intensively for code generation. They’re the teams using value stream management to identify true constraints and applying AI strategically to address them.

This requires a systems view that many organisations lack. Developers focus on code generation productivity because that’s their local view. Engineering leaders focus on team velocity. Executive leaders focus on feature delivery. Without value stream visibility spanning requirements through deployment to user value realisation, each optimises locally without improving globally.

Map one value stream end-to-end. Track where work waits. Measure lead times for each stage. Identify where time accumulates. That’s your constraint. Apply resources (including AI) to that constraint until another stage becomes limiting.
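A minimal sketch of that analysis, assuming you can export rough per-stage timings for recent work items (the stage names and numbers here are hypothetical):

    # Identify the constraint: the stage where work spends the most time.
    # Stage names and timings are hypothetical; real data would come from your
    # ticketing, review, and deployment tooling.
    from collections import defaultdict
    from statistics import mean

    # Hours each work item spent in each stage, end to end.
    work_items = [
        {"requirements": 30, "coding": 6, "review": 40, "testing": 12, "deploy": 20},
        {"requirements": 12, "coding": 4, "review": 55, "testing": 10, "deploy": 18},
        {"requirements": 20, "coding": 8, "review": 38, "testing": 15, "deploy": 25},
    ]

    stage_times = defaultdict(list)
    for item in work_items:
        for stage, hours in item.items():
            stage_times[stage].append(hours)

    averages = {stage: mean(hours) for stage, hours in stage_times.items()}
    constraint = max(averages, key=averages.get)

    for stage, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
        print(f"{stage:>12}: {avg:5.1f} h average")
    print(f"\nConstraint: {constraint}; apply AI (and other investment) here first")

In this hypothetical data, review dominates; generating code faster would only deepen the queue in front of it.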

What you’ll often discover: code generation isn’t your constraint. It might be:

  • Requirements clarity
  • Code review capacity
  • Deployment automation
  • Production monitoring
  • Customer feedback cycles

All of these constrain your ability to deliver value to users. AI can help with all of them, but only if you identify where help is needed.

The teams achieving both speed and stability through AI adoption have something in common: they understand their value stream well enough to know where AI will help versus where it will just create more work-in-progress.

Trust, Verification, and the Cultural Reality Check

Thirty per cent of respondents report little to no trust in AI-generated code. Yet 90% use AI regularly and 80% report productivity gains. This apparent contradiction reveals something important about mature AI adoption: healthy scepticism is a feature, not a bug.

The Trust Paradox

The “trust but verify” approach mirrors how experienced developers treat code from any external source. You don’t blindly trust Stack Overflow solutions. You understand them, adapt them, and verify they work in your context. The same professional judgement applies to AI-generated code.

Seventy-eight per cent of respondents express certainty that AI doesn’t diminish their psychological ownership of code. They view AI as a tool working for them, not an autonomous collaborator they must defer to. This ownership mindset matters more as AI capabilities increase.

The temptation exists to treat AI suggestions as authoritative, especially when they’re well-formatted and comprehensive-looking. Maintaining healthy scepticism requires conscious effort and organisational support.

Training should focus on critically evaluating AI output, not just using AI tools efficiently. Develop verification skills alongside prompting skills. Reward behaviour where developers question AI suggestions and improve them, not just behaviour where developers accept and integrate AI output quickly. Create culture where “the AI suggested this but I modified it because…” is celebrated as professional judgement, not treated as productivity slowdown.

Prompt engineering has increased in importance with AI adoption, and verification and validation skills should increase alongside it. Problem-solving remains critical. Interestingly, the data shows programming language syntax memorisation also increased in perceived importance, which seems counterintuitive in the AI age. This likely reflects an adaptation period in which developers are learning to guide AI effectively rather than simply accepting its output.

Individual contributors should maintain control over when and how AI assists them. Toggle inline suggestions for different task types. Use “on-demand only” modes when deep focus is required. Configure AI tools to manage cognitive load across different kinds of tasks. The goal is treating AI as a tool working for you, not a process you’re working within.

The Friction and Burnout Reality

Despite 80% reporting productivity gains, AI shows no measurable relationship with friction or burnout. Both remain unchanged. This uncomfortable finding deserves attention from engineering leaders celebrating AI productivity improvements.

Friction and burnout are properties of organisational systems and culture, not individual tools. AI helps developers generate code faster. It doesn’t address the meeting overload that fragments their time. It doesn’t resolve unclear requirements that force rework. It doesn’t fix poor communication that creates unnecessary coordination costs. It doesn’t eliminate the organisational dysfunction that creates frustration.

More concerning, some evidence suggests perceived capacity gains invite higher expectations of work output. Stakeholders see developers generating code more quickly and adjust their expectations accordingly. Product managers expect more features in the same timeframe. Leadership expects faster delivery without questioning whether supporting systems have evolved to enable it. The result: developers work at AI-accelerated speeds but experience the same burnout because demands have increased to match capacity.

This work intensification pattern should worry engineering leaders. AI provides an opportunity to use productivity gains in three ways:

  1. Same output, less stress/time
  2. More output, same stress/time
  3. Higher quality, same time (testing, design, refinement)

Most organisations default to option 2 without explicit discussion. Stakeholders unconsciously assume productivity improvements should translate directly to increased output.

The leadership challenge is making conscious choices about how productivity gains get used. Have explicit conversations with teams. Do you want to deliver the same scope with less stress? Ship more features in the same time? Invest more time in quality, testing, and design? The default option (more work, same stress) leads to the burnout persistence the data shows.

Higher AI adoption correlates with more time spent on valuable work and greater authentic pride in accomplishments. This suggests the opportunity exists to offload mundane tasks and focus on problem-solving, design, and creative work. But capturing this opportunity requires intentionality from leadership.

The burnout factors the data identifies aren’t surprising:

  • Low workplace support
  • Lack of workplace justice
  • Low rewards
  • Job insecurity
  • Priority instability

AI doesn’t address any of these root causes. Engineering leaders hoping AI will reduce burnout through productivity gains are addressing symptoms rather than causes. The systemic factors that drive burnout require systemic responses, not better individual productivity tools.

What to Do Monday Morning

The DORA report provides frameworks and insights. Translation to action requires specific steps. Here’s what engineering leaders, teams, and individual contributors should do this week.

For Engineering Leaders

  • Audit your foundation before scaling AI adoption: Use the seven capabilities as a checklist. For each capability, assess where you stand: strong, adequate, or weak. Be honest about gaps:

    • Clear AI policy: do you have one? Is it communicated to all staff?
    • Healthy data ecosystems: can developers access data easily? Is quality high enough for confident decisions?
    • AI-accessible internal context: have you integrated AI tools with internal documentation and codebases?
    • Version control practices: strong enough to handle AI-accelerated change volume?
    • Small batches: do teams resist the temptation to generate large code volumes?
    • User-centric focus: is it genuinely driving prioritisation?
    • Platform quality: does your platform enable or constrain AI benefits?

    Identify your three weakest capabilities. Those are your investment priorities (a minimal scoring sketch follows this list). Don’t expect AI to multiply results until you’ve strengthened these foundations. Fix your systems first, then let AI amplify them.

  • Establish or clarify your AI policy: If you don’t have one, create it this week. If you have one but it’s not widely known, communicate it. Remember that clarity matters more than content. Teams need to know where they stand. Document permitted tools, usage expectations, and how AI fits into workflows. Make the policy applicable to all staff, not just engineering. Support experimentation within clear boundaries.

  • Invest in platform engineering strategically: Treat platform quality as prerequisite for AI ROI, not nice-to-have. If your platform is low quality, AI productivity gains won’t flow through to organisational performance. Focus on holistic developer experience. Prioritise clear feedback mechanisms, which the data shows as most correlated with positive impact. Build integration points for AI tools. Ensure platforms support AI-accelerated workflows, not just human-speed patterns.

  • Implement value stream mapping this month: Start with one value stream. Map it end-to-end from requirement to user value realisation. Track where work waits. Measure lead times for each stage. Identify where time accumulates. That’s your constraint. Apply AI to that constraint, not just to code generation.

    If code review is your bottleneck, use AI to assist reviews. If testing is your constraint, invest in AI-assisted validation. If requirements clarity limits delivery, help product teams use AI for requirements work.

  • Make intentional choices about productivity gains: Have explicit conversations with teams this week. Three options exist: deliver same scope with less stress, ship more features in same time, or invest more time in quality and design. Don’t default to “more work” without discussing it. The data shows burnout persists despite productivity gains because organisations unconsciously choose work intensification. Break that pattern by making conscious choices.
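As flagged in the audit step above, a minimal way to turn those ratings into a ranked investment list might look like the sketch below; the ratings are hypothetical placeholders, not an assessment of any real organisation.

    # A deliberately crude audit: rate each capability, surface the weakest three.
    # The ratings below are hypothetical placeholders; replace them with your own.
    RATING = {"strong": 3, "adequate": 2, "weak": 1}

    audit = {
        "Clear and communicated AI stance": "adequate",
        "Healthy data ecosystems": "weak",
        "AI-accessible internal data": "weak",
        "Strong version control practices": "strong",
        "Working in small batches": "adequate",
        "User-centric focus": "adequate",
        "Quality internal platforms": "weak",
    }

    priorities = sorted(audit, key=lambda capability: RATING[audit[capability]])[:3]
    print("Investment priorities:")
    for capability in priorities:
        print(f"  - {capability} ({audit[capability]})")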

For Engineering Teams

  • Centre user needs ruthlessly: The data shows that without user-centric focus, AI can harm team performance. Make user needs your North Star for all AI-assisted work. Before accelerating implementation with AI, validate that you’re solving actual user problems. Don’t let AI’s speed obscure whether you’re building the right things. Incorporate rich user understanding into roadmaps and prioritisation decisions.

  • Work in small batches deliberately: Resist AI’s temptation to generate large chunks of code. Break work into smaller pieces. Integrate frequently. Get feedback early and often. The data shows this reduces friction and improves product performance, even whilst slightly reducing individual effectiveness scores. Prioritise outcomes over output volume.

  • Adopt “trust but verify” as a team practice: Healthy scepticism about AI output is a sign of maturity. Build validation practices into your workflow. Review AI-generated code as critically as you’d review code from any external source. Question suggestions. Improve them. Maintain technical judgement as a core skill. Celebrate when team members modify AI suggestions based on professional judgement.

  • Strengthen version control practices now: Before AI accelerates your change volume further, ensure your version control habits can handle it. Commit frequently to create safe checkpoints. Ensure strong rollback capabilities so experiments that don’t work out aren’t disasters. Improve branching strategies to enable parallel exploration. Use version control as a psychological safety net for AI-enabled experimentation.

For Individual Contributors

  • Customise AI tools for your workflow: Don’t accept default configurations. Toggle inline suggestions based on task type. Use “on-demand only” modes when deep focus is required. Configure differently for mechanical versus interpretive work. Maintain control over when and how AI assists. The goal is AI working for you, not you adapting to AI’s pace.

  • Maintain ownership mindset: View AI as a tool, not a collaborator you defer to. Stay in the driver’s seat. Develop both prompt engineering and verification skills. Balance delegation with validation. Remember that 78% of developers are certain AI doesn’t diminish their psychological ownership. That ownership requires conscious maintenance as AI capabilities increase.

Quick Wins to Try This Week

Pick one or two of these to implement immediately:

  • Map one value stream: Spend two hours tracking one feature from initial request through to user value realisation. Note where it waits. That’s where AI could help most, and it’s probably not code generation.

  • Audit your platform’s feedback mechanisms: How long does it take to know if a change is safe to deploy? How clear is the feedback? If feedback is slow or unclear, that’s constraining AI’s organisational benefit regardless of code generation speed improvements.

  • Have an explicit conversation about productivity gains: Gather your team for 30 minutes. Ask: “AI is helping us generate code faster. Do we want to use that to deliver the same scope with less stress, ship more features, or invest more time in quality?” Make a conscious choice instead of defaulting to work intensification.

  • Implement one “small batch” constraint: Next time you reach for AI to generate a large feature, break it into three smaller pieces instead. Integrate the first, get feedback, then proceed. Notice whether this reduces rework and improves outcomes.

  • Document your team’s AI usage expectations: Write a one-page document answering: which tools are we using? What types of work are appropriate for AI assistance? What verification practices do we expect? Share it with the team and revisit it monthly.

These aren’t comprehensive solutions. They’re starting points that align with what the data shows matters most. Pick the actions that address your biggest gaps in the seven foundational capabilities.

The Transformation Imperative

AI adoption isn’t tool deployment. It’s organisational transformation comparable to cloud migration, Agile adoption, or DevOps implementation. Each of those transitions required intentional changes to workflows, roles, governance, and culture. Teams that treated them as just new tools struggled. Teams that recognised them as transformation opportunities thrived.

The same pattern holds for AI. Without intentional transformation, AI remains isolated productivity boosts in an unchanged system. Individual developers generate code faster. But deployment processes, code review workflows, testing infrastructure, and organisational decision-making haven’t evolved to handle AI-accelerated development. The productivity gains get absorbed by unchanged constraints.

The evidence is clear. Forty per cent of teams achieve both high throughput and high stability with AI. They’ve done the work to transform their systems. The seven foundational capabilities provide the actionable framework. Data-driven paths exist from local wins to organisational advantage. The possibility is real: spend more time on valuable, meaningful work whilst delivering faster and maintaining quality.

But the challenge is equally clear. Speed gains are outpacing stability solutions. Productivity improvements aren’t automatically reducing burnout. Individual effectiveness isn’t automatically translating to team performance. The gap between potential and reality comes down to intentional leadership and systematic approaches.

The Instability Gap

The instability persistence particularly demands attention. Teams have adapted AI for speed. They haven’t adapted verification infrastructure to maintain quality at AI speeds. This isn’t just a technical gap. It’s a strategic vulnerability.

As I’ve written about in previous posts on testing discipline, verification infrastructure must match code generation acceleration. Fast feedback loops need to keep pace with AI-generated volume. Quality practices are prerequisites, not optional extras.

“Trust but verify” sounds reasonable until you realise most organisations haven’t invested in the verification capabilities that AI-accelerated development requires. The trust is appropriate. The verification infrastructure often isn’t adequate. This gap explains why instability persists despite improving individual code quality.

The Timeline

We’re still early in this transformation. The median developer has 16 months of AI experience. Practices will continue evolving. The tools will improve. But the fundamental dynamic won’t change: AI amplifies what you have.

Organisations building strong foundations now will compound their advantages as AI capabilities increase. Those chasing productivity gains without addressing systemic constraints will find the gap between individual effectiveness and organisational performance widening further.

The Question

The 2025 DORA report makes one thing unambiguously clear: AI is an amplifier, not a solution. The question for engineering leaders isn’t “how do we adopt AI?” It’s “what do we want AI to amplify?”

If your answer is “our current delivery capabilities,” then audit your seven foundational capabilities, map your value streams, invest in platform quality, and make intentional choices about how productivity gains get used. Fix your systems. Build your foundations. Centre your users. Then let AI multiply your results.

If your answer is “well, we need to fix some things first,” then the data validates that instinct. Fix those things. AI will still be there, and it will be more powerful when you’re ready for it. The organisations succeeding with AI aren’t those that adopted it first. They’re those that built the foundations that let AI safely amplify their capabilities.

The data shows the path. Your seven capabilities audit shows where you stand. The question is whether you’ll make the systematic investments required to transform AI from individual productivity tool into organisational force multiplier.

The choice, and the opportunity, are yours.

About The Author

Tim Huegdon is the founder of Wyrd Technology, a consultancy focused on helping engineering teams achieve operational excellence through strategic AI adoption. With over 25 years of experience in software engineering and technical leadership, Tim specialises in translating engineering research into actionable strategies for teams navigating AI-assisted development. His work focuses on the systemic factors that determine whether AI amplifies organisational capabilities or exposes delivery constraints. Having observed the gap between individual productivity gains and organisational performance across multiple teams, he helps engineering leaders build the foundational capabilities that enable AI to safely accelerate delivery whilst maintaining stability and quality.

Tags: Data-Driven Development, DevOps, DORA Metrics, Engineering Excellence, Engineering Leadership, Human-AI Collaboration, Operational Excellence, Organisational Performance, Platform Engineering, Quality Engineering, Software Delivery, Software Engineering, Systematic Thinking, Team Effectiveness, Technical Strategy, Testing Practices, Value Stream Management