The Expertise Discount

A few weeks ago, a post crossed my LinkedIn feed that captured something I’ve been watching with growing concern. Someone with a PhD in an AI-related field had just accepted a job as a delivery driver. The tone wasn’t angry or resentful: it was the resigned humour of someone who understood the irony. Their expertise had become unemployable during an AI boom.

This isn’t an isolated case. In July, I documented how companies were freezing engineering hiring, waiting to see whether AI would eliminate the need for human engineers. What’s happened since isn’t a resumption of hiring: it’s an active devaluation of expertise. Companies are investing billions in AI whilst simultaneously discounting the knowledge needed to use it effectively.

The pattern is remarkably specific. Junior roles have collapsed by 35% whilst senior salary growth has stagnated at 0.61%, well below the 2.9% inflation rate. Staff and Principal engineering roles are being explicitly targeted in layoffs, yet companies are paying recent graduates $200,000 to $1,000,000 starting salaries if they’re “AI native.” The market is making a clear statement: it values youth over experience, AI fluency over engineering judgement.

This is bubble behaviour. The parallels to previous technology cycles are striking: dotcom, Big Data, Data Science. Each followed the same pattern of expertise devaluation followed by painful market correction. The question isn’t whether correction will occur: it’s how much damage accumulates before it does.

The Wait Became a Discount

In July 2025, the software engineering job market was frozen. Senior engineers competed for junior roles, job postings sat unfilled for months, and behind closed doors, CFOs questioned whether they needed to hire engineers at all. Companies were waiting.

That wait evolved into something more aggressive. The data reveals a deliberate shift:

Hiring collapse:

  • Junior roles down 35%
  • Entry-level hiring down 73%
  • 92% of engineering leaders plan to hire fewer juniors

Salary stagnation:

  • Senior engineers: 0.61% growth (inflation: 2.9%)
  • UK seniors: 0.3% growth
  • Both represent real wage declines
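
The real-terms decline is simple arithmetic. A minimal sketch, assuming annual compounding (the function name is mine, not from any cited source):

```python
def real_wage_change(nominal_growth_pct, inflation_pct):
    """Real change in purchasing power given nominal wage growth and inflation."""
    return ((1 + nominal_growth_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

# The figures from the bullets above.
us_seniors = real_wage_change(0.61, 2.9)  # roughly -2.2%
uk_seniors = real_wage_change(0.3, 2.9)   # roughly -2.5%
```

Both numbers come out negative: nominal growth below inflation is a pay cut in real terms.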

Layoffs targeting seniors:

  • Amazon: 40% of 4,700 layoffs were engineers, with senior managers and Principals explicitly targeted
  • TCS: 12,000 mid- and senior-level employees eliminated for “AI-driven efficiency”
  • Microsoft, HP, F5: similar patterns

“AI native” premium:

  • Scale AI: $200,000 base for recent graduates
  • Databricks: up to seven figures for under-25 candidates
  • AI engineers: $120,000 to $300,000+ annually

The pattern isn’t subtle: companies are explicitly replacing expensive senior engineers with cheaper juniors paired with AI tools. The assumption underlying this strategy deserves examination.

The Assumption

Marc Benioff, Salesforce CEO, stated: “My message to CEOs right now is that we are the last generation to manage only humans.” Salesforce announced a hiring freeze for software engineers in 2024, citing a 30% productivity increase from AI tools.

The World Economic Forum’s 2025 report found 41% of employers worldwide intend to reduce their workforces in the next 5 years due to AI automation. When 92% of engineering leaders plan to hire fewer juniors, the implication extends beyond entry-level roles: if AI makes juniors as productive as seniors, why maintain expensive senior headcount at all?

Scale AI’s Head of People made the logic explicit: “Scale AI was eager to hire AI-native professionals, and many of those candidates are early in their careers.” The framing reveals the assumption: younger workers understand AI better, work faster with AI tools, and deliver equivalent output at lower cost.

The “AI native” designation serves as proxy for age, similar to how “digital native” once did. It’s a socially acceptable way to express preference for younger workers without triggering age discrimination concerns. The research on digital natives, however, provides an uncomfortable precedent.

Paul Kirschner’s study in Teaching and Teacher Education found no evidence that people born after 1980 learn differently, are better with technology, or superior at multitasking. The LSE Business Review 2024 found that 79% of workers in intergenerationally diverse teams showed interest in adopting new technology, compared to 67% in age-homogeneous teams. Generation Singapore successfully retrained 40% of mid-career workers (40+) for cloud, DevOps, and data roles.

The “AI native” myth appears to be digital native bias in new clothing. The question is whether the underlying assumption about AI productivity holds up to scrutiny.

The Reality

In early 2025, METR conducted the most rigorous study yet of AI tools and developer productivity (which I’ve discussed in detail previously). They recruited 16 experienced developers from large open-source repositories, averaging 22,000+ stars and 1,000,000+ lines of code. These developers used frontier AI tooling (Cursor Pro with Claude 3.5 Sonnet) on real issues they had identified as valuable to their projects.

The results were striking.

Tasks took 19% longer with AI assistance than without. Developers expected AI to speed them up by 24%. Even after experiencing the slowdown, they still believed AI had sped them up by 20%.

This perception-reality gap explains the entire bubble. Companies believe AI is accelerating development because that’s what developers report. Rigorous measurement shows experienced developers working slower, but the subjective experience feels faster. The overhead is invisible: increased context switching, time spent revising AI suggestions, learning to prompt effectively.

The METR study revealed that AI capabilities are “comparatively lower in settings with very high quality standards, or with many implicit requirements relating to documentation, testing coverage, or linting/formatting.” Production software development lives entirely in this space.

MIT and Microsoft Research found different patterns for junior versus senior developers. Juniors completed tasks 39% faster with GitHub Copilot. Seniors improved by only 8% to 16%. This appears to validate the “Junior + AI = Senior” assumption until you examine what each group was working on.

Faros AI data shows seniors ship 2.5 times more AI-generated code than juniors. Seniors use AI strategically on complex problems. Juniors use it as a crutch on simple tasks. The productivity measurements aren’t comparable: they’re measuring different types of work with different quality standards and different business impact.

Stack Overflow’s 2025 Developer Survey found only 2.6% of experienced developers “highly trust” AI output, whilst 20% actively distrust it. This isn’t resistance to change: it’s appropriate scepticism for roles with accountability. When production systems fail at 2 AM, trust in AI outputs becomes a liability rather than an asset.

Hidden Costs Accumulating

The RAND Corporation studied AI project outcomes across organisations. More than 80% of AI projects fail: twice the rate of non-AI IT projects. The leading root cause: misunderstandings about project purpose and domain context. Exactly what senior engineers prevent.

I’ve documented the 18-month cliff pattern in previous analysis of AI adoption costs. It follows a predictable trajectory:

Months 1-6: Apparent success

  • Features ship quickly, velocity metrics impressive
  • Stakeholders celebrate productivity gains
  • Foundational problems accumulate invisibly

Months 6-12: Debt accelerates

  • Team maintains velocity through shortcuts
  • AI-dependent juniors make poor architecture decisions
  • Code reviews become superficial
  • Testing practices deteriorate: TDD abandoned, flaky tests accepted, test debt accumulates

Months 12-18: Foundations crack

  • Codebase becomes difficult to modify
  • AI solutions prioritised functionality over maintainability
  • Performance degrades
  • Remaining seniors spend more time firefighting than building

Month 18+: Crisis mode

  • Systems fail under stress
  • Team cannot diagnose complex issues quickly
  • Simple changes require disproportionate effort
  • External consultants necessary at $1,500-$2,000 per day

Gartner found that fewer than 30% of CEOs are satisfied with AI ROI despite average spending of $1.9 million. 42% of companies abandoned AI initiatives before reaching production, up from 17% the previous year. These aren’t implementation problems: they’re fundamental misunderstandings about what AI can deliver.

Microsoft Research examined AI debugging capabilities. Claude 3.7 Sonnet succeeded on 48.4% of debugging tasks. OpenAI o1: 30.2%. OpenAI o3-mini: 22.1%. Their conclusion: “Complex debugging remains an area where human expertise provides irreplaceable value. Whilst AI assistants excel as research and automation tools, the creative, hypothesis-driven nature of challenging debugging problems requires human insight, intuition, and adaptability that current AI cannot match.”

The 2 AM production failure scenario exposes the strategy’s flaw. The junior engineer on call receives the page but cannot diagnose beyond running standard playbooks. AI tools prove useless for novel system failures or cascading problems. No senior engineers possess deep enough system knowledge to guide rapid recovery because the team was built around AI augmentation, not human expertise.

Consider a specific pattern I’ve observed across multiple organisations. An e-commerce platform experiences intermittent checkout failures affecting 3% of transactions. The error logs are clean. Standard monitoring shows nothing unusual. The junior on-call engineer queries the AI tool, which suggests common causes: database connection pooling, race conditions, cache invalidation. None apply.

A senior engineer would recognise this as a symptoms-first problem requiring hypothesis generation. They’d check for patterns: time of day, user characteristics, payment methods, geographic distribution. They’d understand that 3% failure rate suggests a specific condition triggering the bug, not a general system problem. They’d know which assumptions in the codebase to question.
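
That kind of segmentation is straightforward to automate once you know to look for it. A minimal sketch, assuming hypothetical transaction records with `failed`, `payment_method`, and `region` fields (none of these names come from a real system):

```python
from collections import defaultdict

def failure_rates_by_dimension(transactions, dimensions):
    """For each dimension, compute the failure rate per bucket.

    `transactions` is a list of dicts with a boolean `failed` key plus
    arbitrary attributes (payment method, region, time of day, ...).
    """
    report = {}
    for dim in dimensions:
        totals = defaultdict(lambda: [0, 0])  # bucket -> [failures, total]
        for tx in transactions:
            bucket = tx.get(dim)
            totals[bucket][1] += 1
            if tx["failed"]:
                totals[bucket][0] += 1
        report[dim] = {
            bucket: fails / total
            for bucket, (fails, total) in totals.items()
        }
    return report

# Invented data: failures cluster in one payment gateway.
txs = (
    [{"failed": False, "payment_method": "card", "region": "EU"}] * 90
    + [{"failed": True, "payment_method": "gateway_x", "region": "EU"}] * 3
    + [{"failed": False, "payment_method": "gateway_x", "region": "EU"}] * 7
)

rates = failure_rates_by_dimension(txs, ["payment_method", "region"])
# Overall failure rate is 3%, but gateway_x alone fails 30% of the time.
```

The value isn’t the code, which is trivial; it’s knowing which dimensions to slice by. That is the mental model the senior engineer brings and the AI tool, reasoning from common patterns, does not.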

The junior engineer, trained to rely on AI suggestions, lacks the mental models to generate novel hypotheses. The AI tool, trained on common patterns, cannot reason about this specific system’s implicit assumptions. The bug persists. Revenue bleeds. Customer complaints accumulate. By morning, the senior engineer diagnoses it in 20 minutes: a third-party payment gateway update changed field validation in ways the integration didn’t anticipate. The fix takes 10 minutes. The cost of the overnight failure: measurable in customer trust and revenue.

This scenario isn’t hypothetical: it’s the predictable consequence of eliminating the expertise needed to handle situations outside normal parameters. The costs become clear: extended outage, revenue loss, engineering stress, customer trust erosion. The realisation that cost savings created operational liability.

This Is Bubble Dynamics

History provides uncomfortable precedents. The patterns repeat with remarkable consistency.

The dotcom bubble peaked in March 2000 when NASDAQ hit 5,048.62. It collapsed 76.81% by October 2002, reaching 1,139.90. Full recovery took 15 years, achieved in April 2015. Silicon Valley alone lost 200,000 jobs from 2001 to 2004.

Princeton research documented that engineers who started careers during the boom experienced 9.9 percentage points lower wage growth than managers. Only STEM workers suffered this penalty: business and management roles were insulated. The skills learned during the boom depreciated faster than normal, creating lasting human capital loss.

One analysis captured the pattern: “In the space of a decade, a group of people had gone from being young upstarts who ‘got it,’ to masters of the universe who were transforming the world, to completely redundant.”

The backlash was severe: rampant offshoring, devaluation of the IT industry as a whole, and diminished salaries and opportunities for everyone. The correction took 3 years of what developers described as “dark, dark times.”

Big Data and Hadoop followed a compressed timeline. By 2015, Big Data disappeared from the Gartner Hype Cycle. Hadoop expertise became worthless when cloud solutions emerged. Companies had invested heavily in Hadoop tools, training people to use them, and building infrastructure. Some really needed it. Many didn’t. The correction timeline: 3 to 4 years from peak to recalibration.

Data Science provides the most recent parallel. The term was coined in 2008 by LinkedIn and Facebook executives. Google’s Chief Economist declared in 2009: “The sexy job in the next ten years will be statisticians.” Harvard Business Review named it the “sexiest job of the 21st century” in 2012. By 2019, the field had moved through the entire Gartner hype cycle.

Entry-level saturation followed predictably. Candidates per position grew from 20 to more than 100. One hiring manager received 500 resumes for a single opening. Bootcamp graduates discovered that the promised six-figure salaries rarely materialised. The correction timeline: 7 to 8 years from peak hype to clear market saturation.

The critical distinction: technologies that deliver real economic value don’t create lasting expertise devaluation. Cloud and AWS expertise remained valuable because the underlying value was genuine. Technologies where hype exceeded value created rapid expertise devaluation: Hadoop, ICOs, generic “big data.”

The question for AI: is the hype exceeding the value? The 80% failure rate and METR productivity data suggest yes.

The Age Factor

Tech workers begin experiencing age bias at 29, compared with 41 across other industries. Only 6% of developers are over 45. Stack Overflow found 70% of developers are under 35; only 5% are over 50.

The EEOC reported that age discrimination charges in tech represent 20% of all discrimination charges in the sector, versus 14.8% cross-industry. In 2024, 16,223 age discrimination charges were filed, up nearly 2,000 from the previous year. 61% of tech workers over 45 report that age affects their employability. 41% have encountered age discrimination directly.

The employment consequences are measurable. Age 57 is where employers consider candidates “too old to hire.” Developers over 40 take 3 months longer to find employment than younger counterparts. When workers over 50 find new jobs, they earn only 50% of their previous salary on average.

The “AI native” preference recycles debunked assumptions. Research has found no evidence that younger workers are better with technology when properly studied. The LSE found that older workers, particularly those socially advantaged, are often more tech-savvy than younger but socially disadvantaged workers.

The assumption that recent graduates possess superior technical foundations contradicts practical experience. University syllabuses typically lag current industry standards by years. A graduate engineer usually takes significantly longer to ramp up to expected capabilities than someone already in the industry, or even self-taught developers who’ve been solving real problems. The “fresh out of university” advantage is largely mythical: what recent graduates possess is knowledge of outdated patterns and theoretical frameworks that require substantial unlearning before they become productive.

NC State University, examining the “AI natives” concept in July 2025, stated simply: “There’s no compression algorithm for experience.”

Yet the bias persists. 42% of HR decision-makers report pressure to hire younger candidates. The discrimination has become subtle: “outdated,” “not a cultural fit,” “lacks innovative thinking.” The University of Gothenburg found that anyone over 35 is considered “old” in tech, an age that’s typically mid-career elsewhere.

Legal cases are establishing precedent. EEOC v. iTutorGroup in 2023 marked the first AI discrimination case. The recruiting software was programmed to automatically reject women over 55 and men over 60. The discrimination was discovered when an applicant resubmitted the same resume with a younger birthdate and received an interview. The settlement: $365,000 for more than 200 applicants. The ruling established that employers cannot avoid liability by claiming a third party designed the AI.

Mobley v. Workday represents the largest ongoing case. The plaintiff, a Black man over 40, applied to more than 100 jobs using Workday’s screening system and was rejected every time. On 16 May 2025, the case was granted conditional certification as a nationwide collective action under the Age Discrimination in Employment Act. The scope includes all job applicants over 40 denied employment through Workday’s platform since September 2020. The case could establish that both HR technology vendors and their customers are liable for algorithmic bias.

Stack Overflow 2025 data shows that developers with 10 to 19 years of experience are most likely (84%) to cite “increased productivity” as a benefit of AI tools. Less experienced developers prioritise “speed up learning” and “efficiency.” Senior developers focus on productivity outcomes. The data suggests experience shapes how developers use AI, not that experience prevents AI adoption.

The “AI native” preference isn’t about capability: it’s existing age bias in new clothes.

The Correction Is Coming

Correction mechanisms are forming from multiple directions.

The EU AI Act became law in August 2024, with full enforcement beginning August 2026. Article 14 requires providers to create “technical and operational conditions for effective oversight.” Article 26 mandates that deployers “shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.”

AI literacy requirements under Article 4 became enforceable in February 2025, mandating training on “technical, legal, ethical and safety-related aspects of AI.” The penalties: up to EUR 35 million or 7% of worldwide annual turnover.

Other jurisdictions are following. 38 US states enacted approximately 100 AI measures in 2025. The UK is implementing AI Officer designation requirements with a statutory code due autumn 2025. Australia proposed 10 mandatory guardrails for high-risk AI in September 2024.

These aren’t theoretical requirements: they explicitly mandate qualified personnel with demonstrated competence. The “Junior + AI” strategy becomes a regulatory liability in Europe from August 2026.

Market signals are shifting. Gartner’s 2025 Hype Cycle shows GenAI entering the “Trough of Disillusionment,” whilst AI Agents sit at the “Peak of Inflated Expectations,” next to fall. Gartner’s assessment: “The 2025 Hype Cycle underscores that AI is at the inflection point between heady promise and industrial reality. Generative AI’s plunge into the trough is not a setback but a sign of healthy maturation.”

McKinsey and BCG data show that AI leaders achieve 1.5 times higher revenue growth, 1.6 times greater shareholder returns, and 1.4 times higher returns on invested capital. High-maturity organisations achieve 3 times higher ROI. The key differentiator: 70% of successful AI transformation effort goes to upskilling people, updating processes, and evolving culture, not technology.

33% of organisations cite “limited AI skills and expertise” as their top barrier to AI adoption. 50% report lacking AI and machine learning expertise, unchanged from 2024. As organisations move past experimentation to production deployment, the expertise gap becomes the binding constraint.

The talent pipeline is collapsing. AWS CEO Matt Garman warned: “If you stop hiring juniors today, in 10 years you’ll face a serious experience gap. Senior engineers eventually retire or move on; you need the next generation ready to step up.”

The mentorship crisis is documented: “The sharpest divide between Big Tech and startups is in mentorship. Big Tech can afford to pair juniors with mentors. Startups, racing to ship features, often can’t. Instead of seniors guiding juniors, many rely on AI tools as a substitute, leaving juniors without the traditional apprenticeship that builds careers.” I’ve explored this challenge extensively in my analysis of mentoring developers when AI writes half their code.

IDC forecasts the developer shortage growing from 1.4 million in 2021 to 4.0 million in 2025. The US specifically faces a 1.2 million developer shortfall by 2026. 545,000 current engineers are expected to exit the industry by 2026. When 73% of entry-level hiring disappears whilst half a million experienced engineers leave, the expertise crisis becomes structural.

Brain drain accelerates the timeline. More than 180,000 tech workers were laid off in 2025 across more than 400 companies. Return-to-office mandates are driving additional departures. AI labs are actively poaching experienced engineers: Anthropic has been particularly successful at recruiting senior researchers and engineers from Google, Meta, Microsoft, Amazon, and Stripe.

The correction mechanisms are predictable: regulatory requirements, competitive differentiation favouring expertise, talent pipeline collapse, and productivity reality diverging from perception. The timeline: 12 to 24 months for the first visible corrections, similar to the Big Data cycle. Longer for full recovery, as the dotcom experience showed.

What This Means Practically

For senior engineers:

Build your “judgement portfolio” through systematic documentation:

  • Architectural decisions: Explain why you chose approach A over B and C, including trade-offs and constraints
  • Incident post-mortems: Trace from symptoms through hypothesis generation to root cause
  • Architecture reviews: Show how you identified future scaling problems before they materialised
  • Mentorship outcomes: Document how your guidance developed others

Develop AI collaboration skills with emphasis on oversight:

  • Learn to recognise when AI suggestions are plausible but wrong
  • Build frameworks for evaluating AI-generated code: edge cases, team conventions, system integration, performance requirements
  • Focus on evaluation skills, not just generation
  • Apply principles of leading without authority as structures flatten

Finding Organisations That Value Expertise

Not all companies are pursuing the “Junior + AI” strategy. Some recognise that competitive advantage comes from technical sophistication, not cost minimisation. The challenge is identifying them before you invest time in applications.

Regulated industries represent the clearest signal. Financial services, healthcare, automotive, aerospace, and critical infrastructure face regulatory requirements that explicitly mandate qualified personnel. The EU AI Act’s Article 26 requirement for “necessary competence, training and authority” isn’t optional for these sectors. Companies in these industries cannot adopt “Junior + AI” models without violating regulatory requirements, creating structural demand for senior expertise.

Banking and fintech organisations competing on transaction reliability, fraud detection accuracy, or regulatory compliance cannot afford the 18-month technical debt cliff. Healthcare technology companies building diagnostic tools or patient management systems face liability for failures that junior engineers paired with AI tools are poorly positioned to prevent. Automotive software teams working on safety-critical systems require expertise that cannot be substituted with AI assistance.

Look for organisations with mature engineering cultures. These reveal themselves in job postings and interview processes. Companies that value expertise ask about debugging complex production issues, architectural trade-offs in ambiguous situations, and experience mentoring junior developers. They want to understand how you’ve handled novel problems, not how quickly you can implement well-defined features.

Contrast this with organisations focused on AI tool familiarity, coding speed metrics, or “AI native” experience. These signal a belief that engineering is primarily about feature velocity rather than system reliability, maintainability, or architectural coherence. Job descriptions emphasising “fast-paced environment,” “move fast and break things,” or “AI-powered development” often indicate a culture that undervalues the judgement expertise provides.

Businesses competing on technical sophistication, not cost, need senior engineers. Companies building developer tools, infrastructure platforms, or technical products for technical buyers cannot hide poor architecture or unstable systems behind marketing. Their customers evaluate technical quality directly. These organisations understand that cutting costs on engineering expertise damages their core competitive position.

As I’ve argued in my analysis of hiring engineers in 2025, organisations should test AI collaboration skills, not avoidance. Companies that understand this distinction are signalling they value judgement about when and how to use AI tools, not blind dependency on them.

Interview processes reveal organisational values. Companies that value expertise conduct technical interviews focusing on problem diagnosis, trade-off evaluation, and system design under constraints. They ask you to debug unfamiliar code, explain how you’d investigate a production incident with incomplete information, or design a system where the requirements conflict. These questions assess judgement that AI tools cannot substitute.

Companies undervaluing expertise conduct interviews testing algorithmic knowledge, coding speed, or familiarity with specific AI tools. LeetCode-style problems optimised for recent graduates, timed coding challenges, or questions about prompt engineering techniques signal an organisation that believes engineering is primarily about implementation speed.

The market is bifurcating. One segment is pursuing cost reduction through “Junior + AI” models, accepting the hidden costs and betting they can manage the consequences. Another segment recognises that AI tools amplify expertise rather than substitute for it, and that competitive advantage comes from combining both effectively.

The correction will favour the second group. Positioning yourself with organisations that already understand this distinction means you won’t need to wait for the market to revalue expertise: you’ll be working somewhere that never devalued it.

For companies:

Recognise the hidden costs with specific accounting:

  • True AI adoption costs run 3 to 4 times apparent costs
  • Include technical debt servicing, quality issues requiring rework, senior expertise needed to make AI tools effective
  • Calculate actual cost per feature: development time + code review overhead + testing effort + documentation work + maintenance burden
  • Track technical debt accumulation rates relative to feature delivery
  • The 18-month cliff is predictable: when accumulated debt constrains velocity more than AI accelerates it
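
The cost-per-feature accounting above can be sketched directly. A hedged illustration: the hours, rate, and function name below are invented for the example, not measured data:

```python
def true_cost_per_feature(dev_hours, review_hours, test_hours,
                          docs_hours, maintenance_hours, hourly_rate):
    """Price the full lifecycle of one feature.

    Mirrors the breakdown in the list above: development time plus code
    review, testing, documentation, and maintenance overheads.
    """
    total_hours = (dev_hours + review_hours + test_hours
                   + docs_hours + maintenance_hours)
    return total_hours * hourly_rate

# Invented feature: AI-assisted development looks cheap if you count
# only the initial development hours...
apparent = true_cost_per_feature(10, 0, 0, 0, 0, hourly_rate=100)

# ...but the overheads dominate once review, rework, documentation,
# and maintenance land.
actual = true_cost_per_feature(10, 6, 8, 4, 12, hourly_rate=100)

ratio = actual / apparent  # 4.0 here, within the 3-4x range cited above
```

The point of the exercise is that the multiplier only becomes visible when every column is tracked; organisations measuring development hours alone will never see it.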

Maintain the mentorship pipeline with intentional structure:

  • No juniors today = no seniors in 10 years (AWS CEO’s warning isn’t theoretical)
  • Engineering expertise develops through apprenticeship taking years to decades
  • “Junior + AI” can work if juniors are paired with seniors who actively mentor
  • Deliberate mentorship required: code reviews explaining reasoning, architecture discussions exposing trade-offs, incident responses demonstrating diagnostics
  • Calculate true cost of senior departures: institutional knowledge, architectural understanding, novel problem diagnosis

Measure actual productivity, not perception:

  • METR revealed a 39-percentage-point gap between perceived and actual productivity: developers believed they were 20% faster whilst measurement showed them 19% slower
  • Implement rigorous measurement:
    • Compare completion time for equivalent tasks with/without AI
    • Measure quality through defect rates and technical debt accumulation
    • Track how much AI-generated code survives code review unchanged
  • If leadership believes AI made engineers 20% more productive when measurement shows 19% slower, every decision based on that belief creates liabilities
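
The measurement the bullets describe reduces to a simple paired comparison. A sketch using invented task timings calibrated to the METR figures cited in this article:

```python
def productivity_delta(baseline_minutes, ai_assisted_minutes):
    """Percentage speed change: positive means AI made the task faster."""
    return (baseline_minutes - ai_assisted_minutes) / baseline_minutes * 100

# Hypothetical paired timings for equivalent tasks, with and without AI.
baseline = [60, 45, 90, 30]
with_ai = [72, 53, 107, 36]  # each roughly 19% longer

measured = sum(
    productivity_delta(b, a) for b, a in zip(baseline, with_ai)
) / len(baseline)

perceived = 20.0  # what developers in the METR study reported feeling

gap = perceived - measured  # around 39 percentage points
```

Real measurement needs genuinely equivalent tasks and enough samples to beat noise, which is exactly why the METR study randomised task assignment rather than relying on before-and-after comparisons.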

Plan for regulatory compliance (EU AI Act enforcement: 2 August 2026):

  • Article 26 mandates oversight by personnel with “necessary competence, training and authority”
  • Article 14 requires “technical and operational conditions for effective oversight”
  • Penalties: up to 7% of worldwide annual turnover
  • Requirements extend beyond having qualified people:
    • Demonstrate their competence
    • Document their training
    • Show they have authority to override AI recommendations
  • “Junior + AI” model without demonstrated competence creates regulatory liability

Hire expertise before the correction makes it expensive:

  • Correction timeline: 12 to 24 months (based on regulatory enforcement dates and Gartner’s hype cycle)
  • Hiring senior engineers now costs less than during a talent shortage
  • Organisations maintaining capability through the trough gain competitive advantage
  • Waiting until the correction is obvious means competing for limited talent at premium prices

For the industry broadly, we face a systemic risk that individual responses cannot fully address.

The talent pipeline is collapsing at entry, middle, and senior levels simultaneously. Entry-level hiring down 73%. Mid-career engineers laid off or leaving. Senior engineers discounted or retiring. This creates a discontinuity that will take a decade to repair once the market recognises the problem.

Looking Forward

The AI PhD delivering groceries represents more than personal misfortune: it’s a market signal. When genuine expertise becomes unemployable during a technology boom, bubble dynamics are operating.

The pattern is familiar from dotcom, Big Data, and Data Science cycles. Expertise gets devalued when hype exceeds underlying value. Corrections occur when reality diverges sufficiently from perception that market mechanisms force revaluation. The timeline varies: 1 year for ICOs, 3 to 4 years for Big Data, 15 years for dotcom’s full recovery.

The METR study provides the critical evidence: the perception-reality gap exists and it’s large. Experienced developers work 19% slower with AI but believe they’re 20% faster. Companies are making strategic decisions based on the perception whilst accumulating costs based on the reality. This divergence cannot persist indefinitely.

The correction mechanisms are forming. Regulatory requirements mandating qualified personnel. Competitive differentiation favouring organisations that combine AI with expertise. Talent pipeline collapse creating structural shortage. Project failure rates forcing organisations to examine why 80% of AI initiatives don’t deliver expected value.

The question isn’t whether correction occurs: it’s how much damage accumulates first. Companies eliminating senior engineers whilst hiring “AI native” juniors are accumulating technical debt, operational risk, and regulatory liability. The 18-month cliff represents the timeline when these costs become visible. The talent pipeline collapse represents the timeline when they become permanent.

I’m observing this through my consulting work. Enquiries have declined, not because organisations don’t need help with AI adoption or operational excellence, but because they believe AI tools eliminate the need for human expertise. This represents a category error that will correct, but the correction timeline affects individual careers in immediate ways.

The resigned humour of the AI PhD turned delivery driver captures the paradox better than anger could. The irony is precise: genuine AI expertise has become unemployable during an AI boom because companies prefer the appearance of AI fluency over the substance of AI understanding.

Bubbles correct. Expertise will be revalued. The challenge is maintaining capability through the period when the market believes otherwise.

If you’re an engineering leader questioning whether the “Junior + AI = Senior” assumption holds, or an organisation preparing for EU AI Act compliance, these aren’t hypothetical concerns: they’re strategic decisions with measurable consequences. I’m happy to discuss what rigorous AI adoption looks like in practice.

The wait became a discount. The discount will become a correction. The question is whether your organisation is positioned for what comes next.


Need support navigating this transition? Wyrd Technology works with organisations developing hiring strategies that value expertise alongside AI adoption, engineering leaders building effective mentoring and coaching practices during technological change, and senior engineers positioning themselves for career progression in this evolving market. If you’re grappling with the strategic questions raised in this article, let’s talk.


About The Author

Tim Huegdon is the founder of Wyrd Technology, a consultancy focused on helping engineering teams achieve operational excellence through strategic AI adoption. With more than 25 years of experience in software engineering and technical leadership, Tim specialises in human-AI collaboration patterns, systematic AI adoption strategies, and building engineering cultures that thrive during technological transitions.

Tags:Age Discrimination, AI, AI Collaboration, Bubble Dynamics, Career Development, Engineering Leadership, Engineering Management, Future of Work, Human-AI Collaboration, Hype, Mentorship, Productivity, Talent Acquisition, Technical Debt, Technical Hiring, Technical Strategy