The Great Wait: How AI Expectations Are Freezing the Software Engineering Job Market
The software engineering job market has entered an unprecedented state of paralysis. Senior engineers with decades of experience find themselves competing for junior roles, whilst companies post job advertisements that sit unfilled for months. Behind closed doors, engineering leaders whisper about AI replacing their teams, and CFOs question whether they need to hire engineers at all.
This pattern is particularly evident across the UK and Australian markets, where I observe similar contractions in engineering hiring through my consulting work. The phenomenon extends to other developed economies including the United States and European markets, suggesting this isn’t merely a regional correction but a global strategic miscalculation driven by AI expectations.
I believe something more fundamental is happening: companies are waiting to see how many engineering roles artificial intelligence can eliminate before committing to human talent. This approach represents a strategic miscalculation that will ultimately harm the organisations that embrace it whilst creating opportunities for those who continue investing in human expertise.
The Hiring Freeze Reality
The numbers tell a stark story. There are 35% fewer software developer job listings on Indeed today than five years ago, and according to recent polling, 25% of engineers said it took them a year to find a new job. Job applications that once received responses within days now disappear into algorithmic voids, and companies that previously competed aggressively for talent now operate with skeleton crews.
The data reveals a disproportionate contraction: whilst Indeed lists 10% more jobs overall today than in February 2020, there are 35% fewer listings for software developers. Job openings for mobile engineers, frontend engineers and data engineers all dropped more than 20% from a year ago. This isn’t merely another economic downturn affecting all sectors equally.
High-profile examples are accelerating the trend. Salesforce announced a hiring freeze for software engineers in 2024, attributing it to a 30% productivity increase achieved through its AI tools. CEO Marc Benioff explicitly stated “My message to CEOs right now is that we are the last generation to manage only humans”, signalling a fundamental shift in how companies view human talent.
According to the World Economic Forum’s 2025 Future of Jobs Report, 41% of employers worldwide intend to reduce their workforce in the next five years due to AI automation. The underlying message is clear: why hire humans when AI might soon do the job?
The psychological impact extends beyond individual career anxiety. Entire engineering communities are questioning their professional worth. Bootcamp graduates find entry-level positions have vanished, mid-level engineers discover their skills are deemed “automatable,” and even senior engineers face scepticism about their value proposition.
Meanwhile, the data shows a tale of two different worlds. Whilst job openings for most engineering roles declined by 20% or more, the demand for AI research scientists and machine learning engineers has boomed, with 80% growth for AI scientists and 70% for machine learning engineers. This stark divergence reinforces the perception that AI expertise is the only safe harbour in a contracting market.
However, highly regulated industries like financial services, healthcare, and automotive face additional constraints on AI adoption due to compliance requirements, safety standards, and liability concerns. A senior engineer at a major Australian bank recently confided that whilst their organisation experiments with AI tools, regulatory requirements from APRA and risk management protocols mean human oversight remains non-negotiable for production systems. Similarly, in the UK, financial services firms operating under FCA regulation find that AI adoption must maintain clear audit trails and human accountability that current tools cannot provide independently.
The AI Replacement Fallacy
The belief that AI can simply replace experienced engineers represents a fundamental misunderstanding of what engineering actually entails. Recent research provides compelling evidence that this view is not only wrong but actively counterproductive.
A rigorous study by METR involving experienced open-source developers found that when developers used AI tools, they took 19% longer to complete tasks than when working without AI assistance. Even more telling, developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%. This disconnect between perception and reality reveals how easily we can be misled by AI’s apparent capabilities.
The study’s methodology was particularly robust: 16 experienced developers from large open-source repositories (averaging 22,000+ stars and 1 million+ lines of code) worked on real issues they had identified as valuable to their projects. When allowed to use frontier AI tools like Cursor Pro with Claude 3.5 Sonnet, they consistently took longer than when working without AI assistance, despite having full autonomy over their AI tool usage.
The METR research revealed significant factors contributing to AI-induced slowdown: increased context switching, over-reliance on AI suggestions that required substantial revision, and time spent learning to use AI tools effectively. These factors are invisible to simple productivity metrics but represent real costs in production environments.
The research helps explain the gap between AI benchmark performance and real-world utility. Whilst AI tools excel at well-defined, algorithmically scorable tasks typical of benchmarks, they struggle with the implicit requirements that characterise production software development: code quality standards, documentation requirements, testing coverage, and integration with existing systems. As the METR researchers noted, AI capabilities may be “comparatively lower in settings with very high quality standards, or with many implicit requirements relating to documentation, testing coverage, or linting/formatting.”
The implications become particularly stark in regulated industries where compliance and auditability requirements add layers of complexity that AI tools cannot navigate independently. A healthcare software system must meet HIPAA requirements, maintain detailed audit trails, and ensure patient safety. Similarly, financial services software must comply with banking regulations, maintain transaction integrity, and provide forensic capabilities that require human understanding of regulatory intent.
Engineering work involves problem definition, requirement clarification, architectural decisions, and stakeholder communication. These activities require human judgement, domain expertise, and the ability to navigate ambiguity. A code generation tool might produce a function that sorts data efficiently, but it cannot determine whether sorting is the appropriate solution to the underlying business problem, nor can it ensure compliance with industry-specific regulations.
The communication skills that enable effective human collaboration become even more critical when working with AI tools. As I explored in previous analysis of communication frameworks, teams with strong written communication practices find that AI accelerates their existing strengths, whilst teams with poor communication discover that AI multiplies their existing problems. Creating effective prompts requires exactly the same skills needed for clear human communication: precise context setting, explicit requirement specification, and structured output expectations.
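The context-setting discipline described above can be made concrete. The sketch below shows one illustrative way to structure a prompt with explicit context, requirements, constraints, and output expectations; the section names and the example service details are assumptions for illustration, not a standard prompt format.

```python
# Illustrative structured prompt: the same precision used for clear human
# communication, applied to an AI coding assistant. Section names and the
# example project details are hypothetical.
PROMPT_TEMPLATE = """\
Context: {context}
Requirements:
{requirements}
Constraints: {constraints}
Output: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    context="Python 3.12 payments service; PostgreSQL accessed via SQLAlchemy 2.x.",
    requirements=(
        "- Add idempotency keys to the /charge endpoint\n"
        "- Reject duplicate keys submitted within a 24-hour window"
    ),
    constraints="No new dependencies; must pass the existing linting rules.",
    output_format="A unified diff plus a one-paragraph rationale.",
)
print(prompt)
```

The same template forces the author to state what a vague verbal request would leave implicit, which is exactly the skill that transfers between human and AI collaboration.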
The replacement fallacy also ignores the collaborative nature of engineering teams. Software development occurs within organisational contexts where engineers must understand business objectives, communicate with non-technical stakeholders, and coordinate with other teams. These interpersonal and strategic capabilities aren’t peripheral to engineering work: they’re central to delivering value.
Evidence from Research and Practice
Research provides compelling evidence about the divergent outcomes of different AI adoption strategies. The RAND Corporation’s comprehensive study of AI project outcomes found that more than 80% of AI projects fail; this represents twice the failure rate of information technology projects that do not involve AI. The study identified five leading root causes of failure, with misunderstandings about project purpose and domain context being the most common reason for AI project failure.
This high failure rate aligns with findings from other research. A systematic study of AI adoption in software testing found a significant gap between expectations and reality, noting that “AI has not yet brought about a revolutionary transformation of software testing in the software industry” despite high expectations. The research confirmed that whilst numerous potential use cases exist, “reported actual implementations and observed benefits were limited.”
The contrast with successful implementations is instructive. Google’s research on AI in software engineering documents measurable productivity improvements from their internal AI tools, but with important caveats. Their success comes from a methodical approach: building AI features that “naturally blend into users’ workflows,” focusing on “making non-controversial fixes that save time and take cognitive load off the developer,” and extensive measurement of actual impact rather than relying on perceived benefits.
Google’s approach emphasises human-AI collaboration rather than replacement. Their internal guidance encourages developers to adopt AI to boost productivity whilst stressing “the importance of humans in maintaining security.” As their research notes, “code review, security and maintenance [are] all areas that require further work” from human engineers, even when AI generates the initial code.
Meanwhile, a recent study shows that adoption of GitHub Copilot is actually associated with a small increase in software-engineering hiring, with requirements for new hires shifting away from advanced programming skills. This finding undermines the replacement narrative: companies using AI tools effectively recognise they need human engineers to provide context, direction, and strategic oversight.
These documented patterns reveal why wholesale replacement strategies fail whilst augmentation approaches succeed. Companies that focus on specific, measurable improvements to existing workflows see genuine benefits. Those that attempt dramatic restructuring based on AI potential face the statistical reality that most AI projects do not deliver expected outcomes.
What AI Actually Does
Understanding AI’s actual capabilities provides clarity about where it adds value versus where human expertise remains essential. Current AI tools excel at pattern recognition, code generation for well-defined problems, and automating repetitive tasks. Google CEO Sundar Pichai revealed that AI now writes over 25% of new code at Google, and companies report productivity gains of up to 35% from AI coding tools. These statistics demonstrate genuine value in specific contexts.
However, this data requires careful interpretation alongside emerging research findings. The METR study’s discovery that experienced developers work 19% slower with AI tools suggests that productivity measurements may not capture the full complexity of software development work. The 25% code generation figure represents syntactic output, not complete engineering solutions. The productivity gains measure specific, isolated tasks rather than end-to-end software delivery including quality assurance, documentation, and integration requirements.
The disconnect appears to stem from the difference between benchmark performance and real-world application. AI tools perform impressively on algorithmic problems with clear success criteria, but struggle when faced with the implicit requirements and quality standards that define production software development.
The most effective applications of AI in engineering involve human-AI collaboration rather than replacement. Engineers use AI tools to accelerate routine tasks, generate boilerplate code, and explore implementation options. This augmentation allows human engineers to focus on higher-value activities: problem definition, architectural design, and strategic decision-making.
The collaborative model also addresses AI’s limitations whilst leveraging its strengths. Human engineers provide context, evaluate AI-generated solutions, and make decisions about appropriateness and quality. They understand system constraints, user requirements, and business priorities that AI tools cannot access.
The Experience Premium
Contrary to the replacement narrative, senior engineers become more valuable in an AI-enabled world, not less. Their experience provides the context and judgement necessary to effectively utilise AI tools whilst avoiding their pitfalls. The METR study’s findings underscore this point: even experienced developers struggled to achieve productivity gains with AI tools, suggesting that effective AI usage requires significant expertise and learning.
The communication skills that experienced engineers have developed become particularly valuable in an AI context. Senior engineers can articulate problems clearly, provide precise context to AI tools, and translate between technical and business requirements. They understand how to structure prompts effectively, recognise when AI outputs need human refinement, and communicate AI capabilities and limitations to stakeholders.
The research revealed that AI capabilities are “comparatively lower in settings with very high quality standards”: precisely the environments where senior engineers operate. Experienced engineers understand when AI suggestions are appropriate, how to adapt generated code to specific requirements, and where human oversight is essential.
Experienced engineers also possess the domain knowledge necessary to evaluate AI outputs critically. They can identify when generated code will perform poorly, violate security requirements, or create maintenance problems. This ability to assess and refine AI-generated solutions represents a new form of engineering skill that combines technical knowledge with AI literacy.
The strategic thinking that senior engineers provide becomes more important as AI handles routine tasks. Companies need engineers who can define problems clearly, make architectural decisions, and guide AI tools towards appropriate solutions. These activities require understanding of business context, technical constraints, and long-term implications that AI cannot provide.
The METR study also highlights the importance of learning effects and adaptation periods. Their research suggests that “there may be strong learning effects for AI tools like Cursor that only appear after several hundred hours of usage.” This finding implies that organisations benefit from maintaining experienced teams who can develop effective AI-human collaboration patterns over time, rather than expecting immediate productivity gains from AI adoption.
The Process Enhancement Opportunity
Whilst companies fixate on AI’s ability to generate code, they’re missing a more strategic application: using AI to improve the pre-engineering processes that often determine project success or failure. The METR study’s findings suggest that AI struggles with ambiguous, poorly defined problems: precisely the challenges that plague software engineering teams when requirements are unclear or specifications are incomplete.
Consider the typical waste in software development cycles: engineers spending hours clarifying vague requirements, reworking solutions when stakeholder expectations weren’t properly captured, or building features that don’t address actual user needs. These inefficiencies stem from problems in requirements gathering, work specification, and stakeholder communication: areas where AI could provide substantial value without replacing human engineers.
AI tools excel at processing large amounts of unstructured information, identifying patterns in user feedback, and helping translate business objectives into technical specifications. They can assist in analysing user research data, consolidating disparate stakeholder inputs, and creating more precise requirements documentation. This application leverages AI’s strengths in pattern recognition whilst avoiding the pitfalls revealed by recent research on AI-assisted coding.
The lean development principle of eliminating waste becomes particularly relevant here. If AI can reduce the ambiguity that leads to rework, false starts, and misaligned features, the productivity gains could far exceed what’s achievable through code generation. Engineers would receive better-defined problems, clearer success criteria, and more coherent project specifications, enabling them to focus on creative problem-solving rather than deciphering unclear requirements.
This approach also addresses the quality standards challenge identified in the METR research. Rather than asking AI to meet the implicit requirements of production code (documentation, testing, integration), AI would help make those requirements explicit and well-defined before engineering work begins. Human engineers would then apply their expertise to implementation, architecture, and technical decision-making with clearer direction and reduced ambiguity.
When Will the Great Wait End?
The current hiring freeze cannot be sustained indefinitely. Several factors will likely force a resolution within the next 12-18 months.
Competitive pressure is already mounting. Companies maintaining skeleton engineering teams are struggling to deliver innovation at the pace required by competitive markets. Early evidence suggests that organisations continuing to invest in engineering talent are shipping features faster and responding more effectively to market opportunities.
AI capability plateaus present another challenge. Current AI tools are approaching practical limitations in software engineering applications. Whilst incremental improvements continue, the revolutionary breakthroughs needed to justify wholesale human replacement face fundamental technical barriers that may take years to overcome, if they prove possible at all.
Regulatory intervention will compound these pressures. European regulatory frameworks like the EU AI Act, Australian AI governance initiatives, and emerging UK legislation will likely require human oversight for AI systems in many applications. Companies that have eliminated engineering expertise may find themselves non-compliant with evolving requirements across multiple jurisdictions.
The economic reality of under-staffed teams is becoming impossible to ignore. Technical debt accumulates faster, system maintenance suffers, and the ability to respond to urgent business requirements diminishes. A recent analysis suggests that companies reducing engineering headcount by 30% whilst waiting for AI solutions face productivity decreases of 40-50% in complex software delivery, creating a negative return on the cost savings.
The market data supports this concern: whilst January 2025 saw the lowest job openings in professional services since 2013, representing a 20% year-over-year drop, the software development sector is expected to grow 17% by 2033. This divergence suggests that current hiring freezes are tactical responses to AI hype rather than strategic decisions based on long-term market realities.
The most likely scenario sees a gradual resumption of engineering hiring beginning in Q3 2025, accelerating through 2026 as companies recognise the limitations of AI-only strategies. Organisations that maintained engineering capabilities during this period will be positioned to capitalise on the talent that becomes available as the market corrects.
Strategic Recommendations
For Engineering Leaders:
Embrace AI as a process enhancement tool rather than a code replacement technology. Focus AI adoption on requirements gathering, stakeholder communication, and work specification where AI can reduce ambiguity and waste without the quality control challenges revealed by recent research. Develop clear frameworks for AI adoption that specify where AI tools are appropriate and where human judgement is essential.
Use this period to strengthen your engineering culture and knowledge-sharing practices whilst experimenting with AI for process improvement. Document domain knowledge, establish mentorship programmes, and create learning environments that develop both technical and AI literacy skills. Focus on AI applications that multiply human effectiveness rather than attempting to replace human capabilities.
For Organisations:
Recognise that the current hiring freeze creates opportunities to recruit exceptional talent that may not have been available during previous market conditions. Companies that continue investing in human capital whilst competitors hesitate can build stronger engineering teams and gain competitive advantages through better AI adoption strategies.
Develop comprehensive AI strategies that focus on process enhancement rather than human replacement. Instead of viewing AI as a cost-reduction mechanism through engineer replacement, treat it as a capability enhancer that reduces waste and ambiguity in the development process. Resist the temptation to make hiring decisions based solely on AI potential.
For Engineers:
The immediate priority should be developing skills that complement AI capabilities rather than competing with them. Focus on problem-solving, communication, domain expertise, and strategic thinking. Learn to work effectively with AI tools whilst maintaining critical evaluation skills that prevent overreliance on automated solutions.
Communication skills deserve particular emphasis in an AI-enabled environment. The same precision required for clear human communication enables effective AI interaction. Develop your ability to articulate problems clearly, provide structured context, and translate between technical and business requirements.
Position yourself as someone who enhances AI capabilities rather than competes with them. In interviews, demonstrate how you’ve used AI tools to improve your productivity whilst maintaining quality and strategic oversight. Target organisations that view AI as an augmentation tool rather than a replacement technology.
Document your experience with AI tools systematically. Maintain records of when AI helped, when it hindered, and what factors determined success or failure. This evidence-based approach to AI collaboration will differentiate you from candidates who either reject AI entirely or accept it uncritically.
Implementation Guidance
Week 1: Assessment and Baseline
- Audit your last three projects for communication-related delays or rework
- Document current AI tool usage patterns across your team (who uses what, how often, with what results)
- Baseline key metrics: time from requirements to delivery, clarification requests per project, scope changes due to misunderstood requirements
- Survey your team on AI tool effectiveness using specific examples rather than general satisfaction
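The Week 1 baseline needn’t be elaborate. A minimal sketch of the metric capture, using hypothetical project records (requirements-agreed date, delivery date, and count of clarification requests; the numbers are invented for illustration):

```python
from datetime import date
from statistics import mean

# Hypothetical records for the last three projects:
# (requirements agreed, delivered, clarification requests raised)
projects = [
    (date(2025, 1, 6), date(2025, 2, 14), 11),
    (date(2025, 1, 20), date(2025, 3, 3), 7),
    (date(2025, 2, 3), date(2025, 3, 21), 15),
]

def baseline(records):
    """Summarise cycle time and clarification load across recent projects."""
    cycle_days = [(delivered - agreed).days for agreed, delivered, _ in records]
    clarifications = [count for _, _, count in records]
    return {
        "avg_cycle_days": mean(cycle_days),
        "avg_clarifications": mean(clarifications),
    }

print(baseline(projects))
```

Even a spreadsheet-level summary like this gives the Week 4 measurements something concrete to compare against; the point is that the baseline exists before any AI intervention begins.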
Week 2: Framework Development
- Create requirements clarification templates with 5W1H structure (Who, What, When, Where, Why, How)
- Establish AI usage guidelines covering appropriate use cases, quality expectations, and review processes
- Begin prompt library documentation: capture 5-10 effective prompts your team has discovered
- Implement structured status update format that includes progress towards business objectives, not just task completion
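The 5W1H clarification template above can be encoded as a simple checklist so that incomplete requirements are caught before engineering work starts. A sketch, with field names and prompts that are assumptions rather than any standard schema:

```python
# Illustrative 5W1H requirements-clarification checklist; the prompts are
# hypothetical examples, not a prescribed standard.
FIVE_W_ONE_H = {
    "who": "Which users or systems are affected, and who signs off?",
    "what": "What observable behaviour changes when this is done?",
    "when": "What is the deadline, and are there sequencing constraints?",
    "where": "Which services, repositories, or environments are touched?",
    "why": "Which business objective does this serve, and how is it measured?",
    "how": "Known constraints: compliance, performance, integration points.",
}

def unanswered(answers: dict) -> list:
    """Return the 5W1H prompts that still lack a non-empty answer."""
    return [q for key, q in FIVE_W_ONE_H.items() if not answers.get(key, "").strip()]

# A requirement is ready for engineering only when nothing is unanswered.
draft = {"who": "Payments team", "what": "Export ledger as CSV", "why": ""}
print(len(unanswered(draft)))  # four prompts still open in this draft
```

The mechanism matters less than the discipline: every open prompt represents a clarification request that would otherwise surface mid-implementation.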
Week 3: Communication Infrastructure
- Start weekly written communication practice: have team members write technical summaries or decision explanations
- Create feedback loops for AI tool usage: what worked, what didn’t, why
- Establish decision logging process for both technical and business choices
- Begin systematic documentation of AI tool limitations alongside capabilities
Week 4: Measurement and Iteration
- Track communication effectiveness metrics: reduction in clarification requests, stakeholder satisfaction with updates
- Measure AI productivity impact using controlled comparison (similar tasks with/without AI assistance)
- Create escalation communication templates for surfacing blockers with appropriate context
- Schedule monthly retrospectives focused specifically on communication and AI collaboration outcomes
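The controlled comparison in Week 4 can be as simple as paired timings on similar tasks, mirroring the METR design at small scale. A sketch with made-up numbers, where a positive result means tasks took longer with AI assistance:

```python
from statistics import mean

# Hypothetical paired timings (hours) for comparable tasks completed by the
# same engineers with and without AI assistance. The figures are invented.
with_ai = [5.2, 3.9, 7.1, 4.4, 6.0]
without_ai = [4.5, 3.6, 5.8, 4.1, 5.2]

def relative_change(treatment, control):
    """Percentage change in mean completion time; positive means slower with AI."""
    return 100 * (mean(treatment) - mean(control)) / mean(control)

change = relative_change(with_ai, without_ai)
print(f"{change:+.1f}% change in mean task time with AI assistance")
```

A handful of tasks won’t be statistically conclusive, but it anchors the retrospective in observed timings rather than the self-reported speedups the METR study showed to be unreliable.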
Ongoing Practices:
- Weekly prompt sharing sessions where team members demonstrate effective AI interactions
- Monthly communication skills development through structured writing exercises
- Quarterly assessment of AI tool effectiveness using evidence-based criteria rather than developer self-reports
What to Do Next
Begin implementing controlled trials of AI tool adoption within your existing teams. Measure actual productivity impacts using methodologies similar to the METR study rather than relying on vendor promises or developer self-reports. Focus initial AI investments on requirements gathering and process improvement rather than code generation.
Resist pressure to make dramatic workforce reductions based solely on AI potential. Consider the full cost of under-staffed engineering teams, including technical debt accumulation, reduced innovation capacity, and delayed market responsiveness. View the current hiring freeze as an opportunity to recruit exceptional talent whilst competitors hesitate.
Track meaningful metrics: time from requirements to delivery, defect rates in production, customer satisfaction scores, and innovation velocity. These indicators reveal the true impact of different AI adoption strategies far better than isolated productivity measurements on individual coding tasks.
The choice is clear: embrace the collaborative potential of human-AI teams or risk being outcompeted by organisations that understand the strategic value of both human insight and artificial intelligence. The market will ultimately reward those who make the right decision.
The Competitive Advantage
The organisations that will thrive in the AI era are those that understand the complementary relationship between human expertise and artificial intelligence. Rather than viewing AI as a replacement for human engineers, they see it as a tool that amplifies human capabilities and enables more strategic work, particularly through process enhancement rather than direct code generation.
These companies are using the current hiring freeze as an opportunity to build stronger engineering teams with exceptional talent whilst developing more sophisticated AI adoption strategies. They understand that AI tools require skilled human operators and that the greatest productivity gains come from reducing ambiguity and waste in the development process, not from replacing human technical expertise.
The strategic advantage comes from combining AI capabilities with human insight, creativity, and judgement. This approach enables faster development cycles through better-defined requirements, higher-quality solutions through maintained human oversight, and more innovative products through enhanced human creativity. Companies that master this combination will outperform those that rely on AI alone or ignore AI entirely.
The current market conditions create a natural experiment that will reveal which approach proves most effective. Companies betting on AI replacement will compete against those investing in human-AI collaboration. The evidence increasingly suggests that the collaborative approach provides sustainable competitive advantages.
Insights from Consulting Practice:
Through Wyrd Technology’s work with organisations across different stages of AI adoption, clear patterns emerge. Companies that successfully integrate AI tools typically follow a three-phase approach: first, they identify specific process inefficiencies where AI can add value; second, they train existing engineering teams to work effectively with AI tools; third, they measure actual productivity improvements rather than relying on vendor claims or developer self-reports.
One client, a mid-sized financial services company in Sydney, initially planned to reduce their engineering team by 25% based on projected AI productivity gains. After conducting controlled trials similar to the METR study methodology, they discovered that AI tools helped with specific tasks but created bottlenecks in others. Rather than cutting staff, they redirected AI investment toward requirements analysis and customer communication, achieving measurable improvements in project delivery times whilst maintaining their engineering headcount.
This experience aligns with previous analysis I’ve published about AI pragmatism in engineering organisations: the most successful AI adoption strategies focus on human-AI collaboration rather than human replacement, emphasise measurable outcomes over marketing claims, and maintain strong engineering capabilities as the foundation for AI tool effectiveness.
Conclusion
The software engineering job market’s current paralysis reflects a fundamental misunderstanding of both AI capabilities and human value. The statistics paint a clear picture: 35% fewer software engineering job listings compared to five years ago, yet the software development sector is projected to grow 17% by 2033. This disconnect reveals that companies waiting for AI to eliminate engineering roles are making strategic errors that will ultimately harm their competitive position whilst creating opportunities for organisations that continue investing in human talent.
The path forward requires recognising that AI and human engineers are complementary rather than competitive. The evidence supports this: companies using AI tools like GitHub Copilot are actually hiring more engineers, not fewer, because they understand that AI amplifies human capabilities rather than replacing them. The most successful organisations will be those that combine AI capabilities with human insight to create more productive and innovative engineering teams.
For engineers facing this challenging market, the key is developing skills that complement AI capabilities whilst maintaining the problem-solving and communication abilities that remain uniquely human. For engineering leaders, the opportunity lies in building stronger teams through strategic hiring whilst developing effective AI adoption frameworks.
The great wait will eventually end, but the companies that use this period to invest in human-AI collaboration will be best positioned for the future. Those that simply wait for AI to solve their talent challenges may find themselves left behind by competitors who understood the real value of combining human expertise with artificial intelligence.
This analysis draws from ongoing research and consulting work with organisations navigating AI adoption strategies. For more insights on engineering leadership and AI pragmatism, explore previous articles on systematic approaches to AI adoption and team effectiveness. Companies interested in developing evidence-based AI adoption strategies can reach out for consultation on human-AI collaboration frameworks.
About the Author
Tim Huegdon is the founder of Wyrd Technology, a consultancy that helps engineering teams achieve operational excellence through structured communication frameworks and process improvement. With over 25 years of experience in software engineering and technical leadership, Tim specialises in helping organisations navigate the complexities of AI adoption whilst building the communication infrastructure and organisational capabilities that enable teams to scale effectively. He guides engineering leaders in implementing evidence-based AI strategies that emphasise human-AI collaboration over replacement, systematic approaches to talent management during market uncertainty, and the communication frameworks necessary to bridge technical and business stakeholders during periods of technological change.