Engineering Starts Before the First Line of Code
When you bring an engineer onto a complex project, you don’t just throw them code and say “figure it out”. You do discovery. You create documentation. You break work into clear tasks. I treated Claude Code the same way whilst building an event-driven pipeline.
I just completed a multi-component ETL system: event-driven architecture, asynchronous processing, scheduled aggregations requiring timezone-aware calculations, atomic counter operations to prevent race conditions under concurrent load. The kind of distributed infrastructure complexity where amateur approaches collapse under pressure.
What makes this interesting isn’t the technology choices. It’s that I treated Claude Code exactly like an engineer joining my team. Same practices I’ve used for decades. Same discipline that makes distributed engineering teams effective.
Those practices aren’t human-specific. They’re engineering fundamentals. Discovery, documentation, task decomposition, TDD, governance. Our job isn’t to write code. It’s to deliver value to users. We do that through effective planning, design, and communication that provides flexibility.
Here’s what professional engineering discipline looks like when your implementation partner is an AI.
The foundation started before a single line of code was written. Because our job starts before code.
Living Documentation as Decision Tool
The PR-FAQ pattern (write the press release and FAQ before building anything) demonstrates a powerful principle. Let the document evolve as you learn. Use it to guide all decisions throughout development. It’s a living document, not a static specification.
I applied the same pattern, simplified for my specific requirements:
- Discovery workshop produced the architecture document. Not a comprehensive specification, but a system-level guide. Architectures, designs, patterns that leave flexibility for implementation details. Technology choices with rationales. Phased delivery plan. Success criteria for each phase. Most importantly: documented decisions using Architecture Decision Records (ADRs) that captured the “why” behind choices.
- CONTRIBUTING.md established governance. Standards, practices, quality gates. TDD mandatory. Zero tolerance typing. Testing patterns. The Boy Scout Rule applied throughout. Not aspirational guidelines but explicit requirements with directive language.
- ADRs preserved decision rationales. Why event-driven architecture instead of polling. Why shared codebase between API and Lambda functions instead of separate projects. Why specific patterns for specific problems.
All three stayed living. Updated as complexity emerged. Referenced constantly by Claude Code, exactly like team members reference wikis and ADRs.
Why This Mattered for AI Collaboration
Claude Code had no institutional knowledge. No memory of yesterday’s architectural discussion. No understanding of team culture or implicit standards.
The documents became the single source of truth across sessions:
- Architecture decisions documented once, referenced many times
- ADRs answered “why” questions without me repeating context
- CONTRIBUTING.md made implicit expectations explicit
- Living updates kept Claude Code aligned as the project evolved
This isn’t an AI-specific pattern. It’s professional documentation practice that benefits human teams and AI collaboration equally.
The Discipline That Made It Work
Every time we learned something, we updated the docs before writing new code. Not busywork. Strategic investment that kept implementation aligned with design.
Example: Timezone requirements emerged mid-project. We didn’t hack timezone logic into existing code. We updated the architecture document with timezone design decisions. Created a new ADR documenting the rationale. Then implemented following the updated design.
The discipline: documentation updates before implementation when learning occurs. This kept Claude Code working from current understanding, not outdated assumptions.
This is the living document pattern in action. PR-FAQ evolves through the working backwards process. Documents capture learning, not just planning. Living docs guide implementation decisions. Not waterfall. Informed iteration.
Architecture document and ADRs followed the same principle:
- Captured architectural decisions as they were made (ADRs for the “why”; the architecture doc for architectures, designs, and patterns)
- System-level thinking, not deep implementation details. This leaves flexibility for implementation
- Evolved with project understanding
- Guided every implementation session
- Made decisions explicit and searchable
But living documents only work if you start with proper discovery.
Discovery Before Code
Our responsibility is discovery and design. This is where engineers deliver value, not in code production.
The engineering value proposition is straightforward:
- Our job isn’t writing code. It’s solving problems for users.
- Discovery work identifies what will actually deliver value. Understanding user needs, exploring solution spaces, making informed technology choices.
- Design work determines how to deliver it effectively. Architecture that supports change, systems that handle real-world conditions, interfaces that serve user workflows.
- Code is the implementation detail, not the deliverable.
I spent several days on comprehensive discovery with Claude Code exploring the problem space. Discussing ideas, comparing technologies, figuring out what made most sense for our requirements. Not writing specifications. Professional discovery workshops that produced working documents.
What Discovery Produced
The outputs were concrete and actionable:
- Architecture document with decision rationales. System-level thinking defining patterns and structures, not implementation prescriptions. Not “we’ll use EventBridge” but “EventBridge for async operations because it provides immediate processing and simpler error isolation, and scales automatically (versus polling, which adds latency and complexity).”
- Phased delivery plan. Domain foundation first (entities, data access, business logic). Then event infrastructure (event triggers, handlers, async processing). Finally ETL transformation logic (domain transformations, atomic operations, scheduled aggregations).
- Technology choices with explicit tradeoffs. Single table DynamoDB design decided upfront with documented reasoning (a key-design sketch follows this list). Shared codebase strategy between API and Lambda functions, with an ADR capturing why and how we might refine it in the future.
- Success criteria for each phase aligned with user value. Not just “implement repositories” but “atomic counter operations verified under concurrent load so users get accurate data.”
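To make the single-table decision concrete, here is a minimal sketch of what a key design might look like. The entity names and key formats below are illustrative assumptions, not the project’s actual schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Keys:
    pk: str
    sk: str


def user_keys(user_id: str) -> Keys:
    # One partition per user; the SK distinguishes the item type.
    return Keys(pk=f"USER#{user_id}", sk="PROFILE")


def daily_metrics_keys(user_id: str, date_iso: str) -> Keys:
    # Daily aggregates share the user's partition, so a single Query
    # can fetch the profile plus a date range of metrics.
    return Keys(pk=f"USER#{user_id}", sk=f"METRICS#DAILY#{date_iso}")


def event_keys(source_id: str, event_id: str) -> Keys:
    # Raw ingested events keyed by source for idempotent lookups.
    return Keys(pk=f"SOURCE#{source_id}", sk=f"EVENT#{event_id}")
```

The point of deciding this upfront is that access patterns, not entities, drive the key design; bolting on a table per entity later would have meant a data migration.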
Time Investment Versus Value Delivered
Several days of discovery investment prevented weeks of architectural backtracking. Here’s the concrete return:
- Event-driven architecture decision (documented in an ADR) prevented a polling approach that would have required a major refactor later. Discovery investment equals better user outcomes with less rework.
- Single table DynamoDB design decided upfront avoided migration complexity that would have emerged from a naive table-per-entity approach.
- Shared codebase strategy prevented code duplication and version skew between API and Lambda functions.
The pattern is consistent: front-load thinking to deliver better value. Engineers delivered value through thinking, not just coding. Discovery enabled confident decision-making focused on user needs throughout implementation.
Discovery created the plan. Governance ensured we followed it.
Documentation as Governance
Every Claude Code interaction knew the rules. No debates about “should we write tests?” Standards weren’t negotiable. Quality gates enforced from the start. Claude Code operated within established guardrails. Consistency across all AI-generated code.
Governance isn’t about control. It’s about clarity. Three layers provided that clarity for both human and AI developers.
The Three-Layer Governance Model
CONTRIBUTING.md: Standards and Practices
The commandments were explicit:
- TDD mandatory (Red-Green-Refactor for all code)
- Zero tolerance typing (100% type hints in Python, strict mode in TypeScript)
- Quality gates must pass (linting, typing, test runs)
- Testing patterns specified (BDD structure, function-based tests)
- Boy Scout Rule enforced (leave code better than you found it)
Compass Pattern: Navigational Documentation
Following the pattern from my previous writing on documentation architecture, documentation served as reference, not duplication:
- Architecture document as navigational tool, not comprehensive spec
- Living documentation updated as learning occurred
- Single source of truth for technical decisions
- Claude Code references canonical sources efficiently
ADRs: Decision Rationales
Architecture Decision Records preserved the “why” behind choices:
- Event-driven architecture for immediate processing and simpler error isolation
- Shared codebase to prevent duplication and version skew
- Specific patterns for specific problems with documented tradeoffs
- Rationales preserved for future reference when questioning decisions
How Governance Worked in Practice
Standards enforcement was straightforward because everything was documented with directive language.
CONTRIBUTING.md made implicit expectations explicit. When Claude Code generated repository methods, it knew: 100% type hints, proper error handling, test-first development. Because CONTRIBUTING.md said so with “MUST” and “MANDATORY” language, not because I repeated myself every session.
Quality gates caught violations immediately. Linting, typing checks, test runs. No “we’ll fix it later” technical debt.
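The article doesn’t name the specific tools, so as an assumption, a gate for a Python codebase might chain a linter, a type checker, and the test suite, failing fast on the first violation:

```python
"""Run all quality gates; exit non-zero on the first failure."""
import subprocess
import sys

# Illustrative toolchain: substitute whatever linter, type checker, and
# test runner your CONTRIBUTING.md actually mandates.
GATES: list[list[str]] = [
    ["ruff", "check", "."],     # linting
    ["mypy", "--strict", "."],  # zero-tolerance typing
    ["pytest", "--quiet"],      # full test run
]


def main() -> int:
    for command in GATES:
        print(f"==> {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            return result.returncode
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into CI or a pre-commit hook, something like this turns “MUST” language into something enforced rather than aspirational.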
The Compass Pattern kept documentation maintainable. References to canonical sources instead of duplication. Architecture decisions documented once, referenced many times. ADRs provided context for “why” questions without archaeological digs through old commit messages.
The Boy Scout Rule applied throughout. Whilst adding features, we improved existing code. Whilst refactoring, we enhanced documentation. Small improvements compounded over time.
What This Enabled
Consistency across sessions. No quality drift as the project evolved from Phase 1 through Phase 3. Standards enforced regardless of implementation partner (human or AI). Claude Code couldn’t violate documented standards. AI-generated code integrated seamlessly with human-written code.
Velocity without chaos. No debates slowing down development. Quality built in, not bolted on after the fact. Refactoring safe because comprehensive tests existed. Documentation provided answers, not more questions.
Professional discipline maintained. Governance prevented shortcuts under pressure. Type safety non-negotiable even for complex code. Test coverage mandatory regardless of timeline. Standards consistent throughout.
The key insight: governance isn’t about control, it’s about clarity. CONTRIBUTING.md, Compass Pattern, and ADRs provided that clarity for both human and AI developers.
But governance only works if work is properly decomposed.
Structure Enables Flexibility
Effective planning and communication enable flexibility. This is how engineers maximise value delivery.
Our engineering responsibility is designing for flexibility:
- Our job is to create options, not lock in decisions. Task decomposition enables re-prioritisation when learning occurs.
- Clear communication keeps everyone aligned as requirements evolve. Documentation, task management, success criteria all serve communication.
- Structure allows responding to user needs without chaos. Good design provides the flexibility to adapt.
The Phased Approach
Work broke into three clear phases with defined boundaries:
- Phase 1: Domain Foundation. Build the core: entities, data access patterns, business logic. Everything downstream depends on solid domain modelling.
- Phase 2: Event Infrastructure. Build async processing: event triggers, handler functions, infrastructure deployment. Can’t test integration without deployed infrastructure.
- Phase 3: ETL Transformation Logic. Build the processing: domain transformations, atomic operations, scheduled aggregations. Where complexity becomes real.
Each phase had dependencies mapped explicitly. Each task within phases had clear success criteria.
Task Structure Example
Every task followed the same structure:
- What to Deliver: Clear, bounded scope aligned with user value
- Success Criteria: Explicit, verifiable outcomes focused on user needs
- Dependencies: What must be complete first
- Result: Flexibility to adapt without scope creep
A handler naming mismatch caught in testing demonstrates the value. The code had one function name; the infrastructure configuration expected another. Would have been a runtime error in production. Caught during integration testing because testing was part of task success criteria. Good design prevented user-facing failures.
Living Task Management
This is where AI collaboration showed unique value. Claude Code updated task management tools (GitHub Issues and Projects) as we worked. Documentation, implementation, and tracking stayed synchronized automatically.
Tasks added and re-prioritised as discoveries occurred. Responding to learning whilst delivering value.
Timezone support: Discovered mid-project that scheduled aggregations needed timezone awareness. Assessed user value (high, users expect local time, not UTC). Created new task. Re-prioritised. Implemented cleanly through proper task boundaries.
Idempotency requirements: Emerged during integration thinking. User impact clear (system failures could duplicate data). Created explicit task. Marked critical. Implemented before production.
Cost optimization: Value analysis revealed some nice-to-have features didn’t justify implementation cost. Lower-priority tasks deferred. Resources focused on user needs.
Being Agile Versus Doing Agile
The difference is professional judgment:
- Responded to change through systematic task management. Structure enabled flexibility.
- Delivered value incrementally. Users got working features, not perfect features.
- Professional judgment throughout: What delivers most value? Not “what’s in the backlog.”
- Engineers as designers and communicators, not just coders. Discovery, design, communication: our actual job.
Task decomposition provided structure. Here’s how it held under real complexity.
When Complexity Met Discipline
This is where amateur approaches collapse. Professional discipline compounds as complexity increases.
Phase 1: Building Confidence Through TDD
Phase 1 focused on domain foundation: entities, mappers, repositories, services. Relatively straightforward domain modelling. TDD from the very first line of code.
TDD shaped design through Red-Green-Refactor cycles. Not testing after implementation. Designing through tests.
Example: Analytics entities needed calculated properties (percentages, scores, averages) derived from atomic counters. The TDD rhythm forced good design:
- Red: Write test for minimum value scenario. Test fails (property doesn’t exist).
- Green: Implement minimal property logic. Test passes.
- Red: Write test for maximum value scenario. Test fails (logic too simple).
- Green: Generalise the logic. Both tests pass.
- Red: Write test for empty dataset edge case. Test fails (division by zero would raise an exception).
- Green: Add edge case handling with safe division. All tests pass.
- Refactor: Extract calculation logic, improve names. Tests still pass, confirming behaviour unchanged.
The result: comprehensive test coverage emerged naturally from TDD discipline. Edge cases considered before implementation. Clean, testable design. Zero rework required.
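As an illustration of where that cycle ends up, here is a hedged sketch of a calculated property with the safe-division edge case; the entity and field names are invented for the example, not taken from the project.

```python
from dataclasses import dataclass


@dataclass
class DailyAnalytics:
    """Aggregate derived from atomic counters (names are illustrative)."""

    successes: int = 0
    attempts: int = 0

    @property
    def success_rate(self) -> float:
        # Edge case surfaced by the "empty dataset" test: no attempts yet
        # must yield 0.0 rather than raising ZeroDivisionError.
        if self.attempts == 0:
            return 0.0
        return self.successes / self.attempts


def test_success_rate_handles_empty_dataset() -> None:
    # Given an entity with no recorded attempts
    analytics = DailyAnalytics()
    # When the calculated property is read
    # Then it degrades safely instead of raising
    assert analytics.success_rate == 0.0


def test_success_rate_for_recorded_attempts() -> None:
    analytics = DailyAnalytics(successes=3, attempts=4)
    assert analytics.success_rate == 0.75
```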
Phase 1 established the TDD rhythm with Claude Code. Quality gates working smoothly. Documentation references preventing duplication. Confidence building for harder phases ahead.
Phase 2: Distributed Complexity Emerges
Phase 2 introduced event-driven infrastructure. EventBridge, Lambda handlers, asynchronous processing. Integration between components that don’t share memory.
Task decomposition became critical here:
Sequential dependencies made explicit:
- EventBridge infrastructure must exist before handlers can be tested
- Event schemas must be defined before publishers and consumers are written (see the sketch after this list)
- Lambda deployment must work before integration testing possible
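A hedged sketch of that ordering, assuming boto3 and EventBridge: define the schema first, then write the publisher against it. The event name, bus, and fields here are illustrative, not the project’s real schema.

```python
import json
from dataclasses import asdict, dataclass

import boto3


@dataclass(frozen=True)
class RecordIngested:
    """Event schema agreed before any publisher or consumer is written."""

    record_id: str
    source: str
    occurred_at: str  # ISO 8601, UTC


def publish(event: RecordIngested, bus_name: str = "etl-pipeline") -> None:
    """Publish the event to EventBridge; consumers match on DetailType."""
    client = boto3.client("events")
    client.put_events(
        Entries=[
            {
                "EventBusName": bus_name,
                "Source": "etl.ingestion",  # illustrative source name
                "DetailType": "RecordIngested",
                "Detail": json.dumps(asdict(event)),
            }
        ]
    )
```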
The win from structure came during integration testing. Handler function name in code didn’t match infrastructure configuration. Would have caused runtime error in production. Caught during integration testing. Fixed in 15 minutes because testing was part of task success criteria.
What could have failed without decomposition: building handlers before infrastructure (deployment failures), writing integration code before schemas defined (runtime type errors), deploying without testing (production discovery of mismatch). Task boundaries and success criteria prevented all of these.
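A minimal sketch of the kind of check that catches this class of mismatch: resolve every handler string declared in the infrastructure configuration and fail if the function doesn’t exist. The config format and handler paths below are assumptions for illustration.

```python
import importlib

# Handler strings as they might appear in infrastructure configuration
# (serverless/SAM/CDK definitions); these values are placeholders for
# whatever your deployment config actually declares.
DECLARED_HANDLERS = [
    "handlers.ingestion.handle_record_ingested",
    "handlers.aggregation.run_scheduled_aggregation",
]


def resolve(handler_path: str):
    """Import the module and return the named function, or raise."""
    module_path, _, function_name = handler_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, function_name)


def test_declared_handlers_exist() -> None:
    # Catches "code says one name, infrastructure expects another"
    # at integration-test time instead of at runtime.
    for handler_path in DECLARED_HANDLERS:
        assert callable(resolve(handler_path))
```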
How practices scaled in Phase 2:
- Living docs: EventBridge architecture decisions (documented in ADRs) referenced throughout the phase
- ADRs as guidance: “Why EventBridge not polling?” The ADR had the answer.
- TDD: Handler functions test-driven, integration verified
- Governance: Error handling patterns from CONTRIBUTING.md applied consistently
Phase 2 taught us that distributed systems need clearer boundaries. Integration testing catches architectural issues. Documentation references prevent reinventing decisions. Structure enables confident asynchronous development.
Phase 3: Where It Got Real
Phase 3 brought full system complexity: external data ingestion and parsing, domain transformations, atomic counter operations, scheduled aggregations requiring timezone-aware calculations, idempotency requirements for retry handling.
This is where amateur approaches collapse. Professional discipline compounds.
Discovery 1: Timezone Requirements
Realised mid-project that scheduled aggregations needed timezone awareness. Aggregations must run at user-local times, not UTC. Date-based metrics needed to align with user timezones.
The amateur response: hack timezone logic into existing code. Rush implementation without tests. Hope it works.
The professional response was systematic:
- Assess user value (high, users expect correct local time)
- Create new task with clear acceptance criteria
- Update architecture document with timezone design
- Implement via TDD (test timezone conversions before using them)
- Integrate through defined interfaces
Result: clean implementation, fully tested, properly documented.
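A hedged sketch of the test-first shape this can take, using Python’s zoneinfo; the function name and timezone choices are illustrative rather than the project’s actual implementation.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo


def local_day_window_utc(day: str, tz_name: str) -> tuple[datetime, datetime]:
    """Return the UTC start/end of the given calendar day in the user's timezone."""
    tz = ZoneInfo(tz_name)
    start_local = datetime.fromisoformat(day).replace(tzinfo=tz)
    end_local = start_local + timedelta(days=1)
    return start_local.astimezone(timezone.utc), end_local.astimezone(timezone.utc)


def test_new_york_day_maps_to_utc_offsets() -> None:
    # Written first (Red): aggregations must cover the user's local day,
    # not the UTC day. 2024-06-01 in New York is UTC-4 (daylight saving).
    start_utc, end_utc = local_day_window_utc("2024-06-01", "America/New_York")
    assert start_utc == datetime(2024, 6, 1, 4, 0, tzinfo=timezone.utc)
    assert end_utc == datetime(2024, 6, 2, 4, 0, tzinfo=timezone.utc)
```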
Why this worked: engineering responsibility is understanding user needs, then designing solutions. Task decomposition allowed new requirements to become new tasks. TDD ensured timezone logic correct before integration. Living documentation captured design decisions. Claude Code had clear specification to implement against.
Value delivered: Users get correct timezone-based aggregations. Our job is solving their problem right.
Discovery 2: Atomic Operations Under Concurrent Load
The challenge emerged from system architecture. Multiple Lambda invocations processing data concurrently. Same data could appear in multiple operations processed simultaneously. Read-modify-write pattern would lose updates through race conditions. This would corrupt user data.
Engineering responsibility: design for correctness under real-world conditions.
TDD approach: tests verified atomic behaviour first. Simulated concurrent Lambda invocations. Each updating same counter. Verified final count equals sum of all increments. Without atomic operations, updates get lost. With atomic operations, perfect accuracy.
Implementation followed tests:
- Repository methods using database atomic operations
- No read-modify-write pattern (race conditions eliminated)
- Calculated fields derived from atomic counters (always consistent)
Result: comprehensive test suite validating concurrent behaviour. No race conditions in production. Confidence in concurrent processing. All verified through tests before deployment.
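A hedged sketch of the repository method, assuming DynamoDB via boto3; the table, key, and attribute names are invented for the example. The ADD update expression makes the increment atomic on the server, so concurrent Lambda invocations cannot lose updates.

```python
import boto3


class AnalyticsRepository:
    """Counters updated with DynamoDB's atomic ADD, never read-modify-write."""

    def __init__(self, table_name: str = "etl-analytics") -> None:
        self._table = boto3.resource("dynamodb").Table(table_name)

    def increment_counter(self, pk: str, sk: str, field: str, amount: int = 1) -> int:
        """Atomically add `amount` to a counter and return the new value."""
        response = self._table.update_item(
            Key={"pk": pk, "sk": sk},
            UpdateExpression="ADD #field :amount",
            ExpressionAttributeNames={"#field": field},
            ExpressionAttributeValues={":amount": amount},
            ReturnValues="UPDATED_NEW",
        )
        return int(response["Attributes"][field])
```

The concurrency test described above can then hammer this method from multiple threads against a test table and assert that the final count equals the sum of all increments.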
Value delivered: Users get accurate data. Our job is building reliable systems.
Discovery 3: Idempotency Requirements
Discovered during integration thinking: Lambda retries need idempotency. Re-processing same data must produce identical results. Can’t double-count. System failures could duplicate user data.
Engineering responsibility: anticipate failure modes, design for resilience.
Created explicit task marked “MUST complete before production”. Defined acceptance criteria: re-process same data, verify identical state. Implemented idempotency checks via TDD. Verified through comprehensive test scenarios.
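A minimal sketch of one way to implement that check, assuming DynamoDB; the marker-item approach and names are assumptions for illustration, not the article’s documented design. A conditional put records “this batch was processed” exactly once, and retries that lose the race skip reprocessing.

```python
import boto3
from botocore.exceptions import ClientError


def claim_batch(table_name: str, batch_id: str) -> bool:
    """Return True if this invocation claimed the batch, False if already processed."""
    table = boto3.resource("dynamodb").Table(table_name)
    try:
        table.put_item(
            Item={"pk": f"BATCH#{batch_id}", "sk": "PROCESSED"},
            # The write only succeeds if no marker item exists yet.
            ConditionExpression="attribute_not_exists(pk)",
        )
        return True
    except ClientError as error:
        if error.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # A previous or concurrent invocation already processed it.
        raise
```

The acceptance criterion (“re-process the same data, verify identical state”) then becomes a straightforward test: run the processing path twice with the same batch and assert the stored counters don’t double.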
Value delivered: System handles failures gracefully. Our job is designing robust systems.
How All Practices Held Together
The compound effect demonstrated throughout Phase 3:
- TDD: Every new complexity had tests first. Atomic operations verified before implementation. Timezone calculations tested with edge cases. Idempotency proven through test scenarios. Regression prevention built-in throughout.
- Living Documentation: Architecture evolved with discoveries. Timezone requirements led to updated docs, then implementation followed design. Cost decisions documented with rationales for future reference. Not “figure it out in code”. Professional design process.
- Task Decomposition: Complexity managed through clear boundaries. Each new discovery became a proper task. Dependencies explicit and respected. Progress measurable, completion verifiable. Adaptation possible because structure existed.
- Governance: CONTRIBUTING.md prevented shortcuts under pressure. Quality gates enforced regardless of “just ship it” temptation. Type safety non-negotiable even for complex code. Test coverage mandatory for all new functionality. Standards consistent from Phase 1 through Phase 3.
- Discovery Decisions: Early thinking paid continuous dividends. Event-driven architecture supported concurrent processing perfectly. Single table design simplified multi-entity operations. Technology choices validated under real load. Architectural decisions from Day 1 still sound in Phase 3.
We never had a moment of “the wheels are coming off.” Professional practices (discovery, design, communication) prevented the chaos that complexity usually brings. Engineering discipline delivered user value throughout.
Engineering Practices, Not AI Practices
The core insight is straightforward: these are engineering practices, not AI practices. They work because they’re fundamentally sound, not because AI requires special treatment. Seven patterns emerged from this project that apply equally to human teams and AI collaboration.
Discovery Pays Dividends
Time invested in discovery pays continuous dividends. Our job is understanding problems before solving them, and the several days invested in discovery paid off throughout all three phases of implementation. Architectural decisions made once got referenced many times.
The EventBridge decision prevented a polling approach that would have required major refactoring later. Single table design simplified all subsequent data operations. Shared codebase strategy prevented duplication across API and Lambda functions. Understanding the problem deeply enabled fast implementation later without any architectural backtracking.
This applies to AI collaboration the same as distributed human teams. AI needs the explicit context that discovery provides.
Living Docs Provide Continuity
Documentation that guides decisions rather than just recording them became essential throughout the project. We updated the architecture document at the system level, defining patterns and approaches whilst leaving implementation flexibility. Every major decision referenced existing docs, updated them, then used the updated version.
This wasn’t planning every detail upfront, and it wasn’t figuring it out in code without documentation. The middle path worked: system-level thinking captured, updated as learned, implementation following design patterns with flexibility for details.
This matters for AI collaboration because AI has no memory across sessions. Living docs provide continuity. Claude Code could resume any session by reading current architecture. References to canonical sources prevent duplication, following Compass Pattern principles.
Standards Enable Velocity
Explicit standards enabled velocity rather than reduced it. CONTRIBUTING.md, referenced throughout the project, meant zero debates about whether to follow practices. Quality got built in from the start with zero compromises on TDD, typing, or quality gates. The result was consistent quality with no technical debt accumulation.
Standards documented once became referenced every session. “MUST follow patterns in…” became routine. Quality gates enforced standards automatically through linting, typing, and test runs.
AI collaboration particularly benefits from this because AI needs directive language. “MUST” and “MANDATORY” ensure compliance, preventing drift as AI generates code. Explicit standards free engineers to focus on user value instead of debates about practices.
Structure Allows Flexibility
The counterintuitive truth proved itself throughout: structure allows flexibility, it doesn’t restrict it. Clear boundaries made adaptation possible. When timezone requirements emerged mid-project, they became a new task and got implemented cleanly. When idempotency requirements appeared during integration thinking, they became an explicit task and got properly delivered.
New requirements became new tasks rather than scope creep. Progress stayed measurable and completion verifiable. Without clear boundaries, everything becomes fluid chaos.
AI works best with bounded tasks where clear success criteria eliminate ambiguity and dependencies stay explicit so AI doesn’t build in the wrong order. Our job is creating options through effective planning and design, not locking in decisions.
TDD Shapes Thinking
Test-Driven Development shaped how we thought about implementation. Our responsibility is thinking, not just coding, and Red-Green-Refactor proved to be a design process rather than a testing activity. Calculated properties had edge cases tested before implementation. Atomic operations had concurrent update scenarios verified through tests first. Tests drove design rather than validating it after the fact.
The rhythm of dozens of Red-Green-Refactor cycles per feature created this design process. Each cycle followed the same pattern: fail, minimal pass, refactor. Design emerged through this process whilst comprehensive coverage emerged naturally from TDD discipline.
AI can generate tests from specifications, verify AI-generated implementation, and create a safety net for AI-assisted refactoring.
Learning Through Doing
Effective planning proved to mean learning and adapting rather than perfect predictions. This wasn’t waterfall with everything planned upfront, and it wasn’t chaos with no plan. The middle path worked: discovery provided direction, implementation revealed details, documentation captured evolution, and structure enabled adaptation.
Timezone requirements discovered mid-project got properly integrated. Cost optimization got analysed, some features got deferred, and decisions got documented. Atomic operations got realised as necessary, designed properly, and implemented systematically.
We couldn’t predict every requirement upfront, but living docs kept AI aligned through changes as we adapted to discoveries.
Judgment Over Process
Professional judgment about what delivers value matters more than process compliance. Being agile differs from doing Agile, and the difference matters. “Agile” as practiced often means diluted principles with process over outcomes. Being agile means embracing change, delivering value, and learning from feedback through professional judgment about what actually delivers value.
Cost-benefit analysis led to deferred features that didn’t justify investment. Scope decisions focused on production-critical functionality through continuous value assessment. This wasn’t “build everything in the backlog” or “follow the methodology perfectly”.
AI can generate infinite code, but human judgment determines what’s valuable whilst discipline prevents building unnecessary complexity.
The Common Thread
Engineering is about delivering value through discovery, design, and communication. The same principles that make distributed teams effective apply here: clear communication, explicit standards, bounded work, quality discipline, and continuous learning.
AI amplifies these practices for better or worse. Good practices and AI multiply effectiveness whilst bad practices and AI multiply chaos. The foundation remains engineering responsibility rather than code production. Our job is solving problems for users. Code is just how we do it.
What’s Different, What’s The Same
The solution isn’t AI-specific. It’s better engineering discipline applied systematically.
What’s Actually Different With AI
Three key differences matter when AI is your implementation partner:
Can’t ask clarifying questions across sessions. Human engineer: “What did we decide about error handling last week?” AI: no memory of previous session unless explicitly documented.
The solution: everything must be explicit in documentation. Living architecture document provides continuity. Decision logs capture rationales. Not an AI-specific problem. Same challenge with distributed teams across timezones.
No institutional knowledge. Human engineer absorbs team culture, unwritten patterns, implicit standards. AI only knows what’s documented.
The solution: all context must be written down. CONTRIBUTING.md makes implicit standards explicit. Architecture document captures design decisions. Forces good practice that benefits humans too.
Less forgiving of ambiguity. Human engineer interprets vague requirements, asks questions, fills gaps. AI makes assumptions if requirements unclear, may choose wrong interpretation.
The solution: directive language matters (“MUST”, “MANDATORY”). Success criteria must be explicit. Constraints documented clearly. Again, not AI-specific. Same clarity helps human developers.
But the Solution Isn’t AI-Specific
It’s better engineering discipline. Clear requirements. Comprehensive documentation. Explicit standards. Measurable success criteria.
Same challenges exist in distributed teams (no hallway conversations), offshore development (timezone gaps), remote work (asynchronous communication), open source (contributors don’t share context).
Mature practices handle all of these: documentation over oral tradition, explicit over implicit standards, written over assumed knowledge.
For Engineering Leaders
Investment in engineering practices pays off with AI. Teams that struggle with AI often struggled with distributed work. Root cause: lack of discovery and design discipline, not AI expertise.
AI exposes existing weaknesses. Teams that just write code versus teams that deliver value. The difference becomes visible when AI removes the coding bottleneck.
Fix the fundamentals (planning, design, communication) and AI becomes a force multiplier. Strong practices and AI equal compound effectiveness. Same investment serves human and AI collaboration.
The key truth: AI forces us to document what we should have documented anyway. It makes engineering discipline visible. Discovery, design, communication become explicit requirements, not optional practices.
What Doesn’t Change
Professional engineering discipline remains constant: systematic over ad-hoc, explicit over implicit, tested over assumed, documented over tribal knowledge.
These principles pre-date AI and will outlast current tools. Good documentation helps humans regardless of AI. Clear standards enable team coordination. Proper decomposition makes work manageable. TDD produces better design.
AI is a tool that benefits from good practices, not a reason to invent new ones.
What to Do Monday Morning
The practices are clear. Here’s how to apply them.
For Individual Engineers
1. Remember your job: Deliver value to users through discovery, design, and communication. Not code production. Code is the implementation detail, not the deliverable.
2. Start with discovery workshops. Understand the problem before proposing solutions. Use AI to explore problem space. Document decisions and rationales. Capture why, not just what.
3. Create living architecture docs. Communication tool that keeps everyone aligned as you learn. System-level thinking: architectures, designs, patterns, not implementation prescriptions. Update as discoveries occur. Reference throughout implementation. Pattern: discovery, document system-level decisions, update as learned, implement with flexibility.
4. Decompose work properly. Design for options, not locked-in decisions. Clear tasks with explicit success criteria aligned with user value. Dependencies made visible. Each task independently completable.
5. Practice TDD discipline. Design before implementation. Our responsibility is thinking. Red-Green-Refactor creates better designs than code-then-test.
For Engineering Leaders
1. Audit existing practices. Do teams focus on discovery and design or just code production? Teams good at planning, design, and communication will adapt quickly to AI. Teams with weak practices will struggle.
Questions to ask: Do we have comprehensive CONTRIBUTING.md? Is our architecture documented and kept current? Do we decompose work with clear success criteria? Do we enforce quality gates consistently?
2. Invest in planning and communication as core engineering activities. Not overhead. Core professional work. Discovery identifies user value. Design creates solutions. Communication keeps everyone aligned.
3. Measure what matters. Not lines of code generated by AI. Not percentage of code AI-written. Measure value delivered to users. Time to production-ready code. Maintenance burden reduction. Technical debt prevented.
4. Build culture. Engineers are problem solvers and designers, not code factories. Our job is delivering value through discovery, design, and communication. Code is how we do it, not what we do.
5. Recognize the pattern. Teams good at discovery, design, and communication will thrive with AI. Teams that struggle with distributed work will struggle with AI. Same root cause: engineering discipline.
The Meta-Takeaway
Professional engineering discipline is your competitive advantage. Discovery, design, communication that delivers user value. Not AI tool choice. Not subscription tier. Not code generation speed.
Engineers who understand their job is solving problems, not writing code, will succeed with or without AI. The practices that made you effective before AI make you more effective with AI. They scale naturally because they’re fundamentally sound.
The uncomfortable truth: if you’re struggling with AI, the problem likely isn’t the AI. Audit your practices. Strengthen your fundamentals. AI works brilliantly when applied to professional discipline.
Bringing It Full Circle
Complex multi-component event-driven system delivered. Zero major rework required. Quality maintained throughout all phases. Complexity managed successfully through discipline. Production-ready system serving users.
What makes this different: not tips for using AI. Not theoretical best practices. Not “AI will change everything” hype. Professional practice demonstrated in action. Showing what engineering excellence looks like with AI as implementation partner.
The Proof Point
These practices worked because they’re fundamentally sound. Discovery prevents thrashing. Living documentation provides continuity. Governance prevents drift. Decomposition enables adaptation. TDD produces better design.
AI didn’t require new practices. It benefited from proven ones. The same discipline that makes distributed teams effective makes AI collaboration effective.
The Uncomfortable Truth
AI doesn’t fix bad engineering practices. It exposes them.
Teams with strong practices (discovery, design, communication) thrive with AI. Teams with weak practices struggle. The difference is discipline, not tools. Root cause: engineering fundamentals, not AI expertise.
The Opportunity
For engineers who know these practices work: you’re already prepared for effective AI collaboration. Your experience is the competitive advantage. Apply what you know. It scales to AI naturally.
For engineers building these muscles: start with fundamentals. Discovery, documentation, decomposition, TDD, governance. They work for human teams and AI partners. Investment compounds as AI capabilities improve.
Final Thought
When asked how I structure work when AI is doing half the thinking, the answer is simple: I structure it professionally.
My job is delivering value to users through discovery, design, and communication. AI is a tool that helps with implementation. The same discipline that makes distributed teams effective makes AI collaboration effective.
Because at the core, it’s not about AI. It’s about engineering excellence. And engineering excellence means understanding our responsibility: we solve problems for users. Code is just how we do it.
The practices that served us for decades continue to serve us now. Made explicit for AI consumption. Enforced through tooling. Amplified by AI capabilities. Fundamentally unchanged.
Professional engineering discipline isn’t a barrier to AI adoption. It’s the foundation that makes it work.
Need help building engineering discipline? Whether you’re an individual engineer looking to strengthen your practice or leading a team implementing these approaches, Wyrd Technology offers coaching, mentoring, and training tailored to your needs. Get in touch to explore how we can support your engineering excellence journey.
About The Author
Tim Huegdon is the founder of Wyrd Technology, a consultancy focused on helping engineering teams achieve operational excellence through strategic AI adoption. With over 25 years of experience in software engineering and technical leadership at companies including Yahoo! Eurosport and Amazon Prime Video, Tim specialises in translating engineering fundamentals into effective AI collaboration strategies. His work focuses on demonstrating how mature engineering practices (discovery, documentation, TDD, governance) scale naturally to AI-assisted development.