Every AI-augmented team produces artefacts that are individually plausible but collectively inconsistent. This framework governs cross-artefact integrity — ensuring code, spec, decisions, and schema stay aligned as models change, sessions end, and teams grow.
Every artefact — code, spec, decisions, docs — is authored by different AI sessions at different times. Each looks plausible on its own. None are reconciled with each other.
Each document serves a distinct reader. A CTO reads the Drift Audit. An architect reads the SDLC. A developer lives in the Master Build Guide. A sceptic reads the Case Study.
Process alone cannot prevent drift. MiVA's anti-drift strength came from three reinforcing layers working together — each one catching what the layer above missed.
Process discipline degrades under deadline pressure. Architectural enforcement does not. Every ring in the SEAL layer catches drift that process rules miss — because it operates at the system level, not the human level. The most reliable anti-drift mechanism is one that makes the wrong action structurally impossible, not merely discouraged.
This methodology doesn't compete with Spec Kit, SAFe, or DevOps. It addresses the AI-specific risks those frameworks were not designed for.
| Framework | Relationship | Type |
|---|---|---|
| GitHub Spec Kit / SDD | SDD generates specifications. This methodology keeps them aligned with code over time. | Complementary |
| SAFe Agile | SDLC phases fit inside sprint cadence. Reconciliation Sweeps map to Inspect & Adapt. | Embedded |
| DevOps / CI-CD | Audit findings typically produce new automated checks added directly to the pipeline. | Additive |
| ISO 9001 / 27001 / GxP | CD-series Drift Incident Log is analogous to non-conformance reporting. Integrates with QMS. | Integrates |
| Arthur AI · Atlan | Those govern runtime agent behaviour. This governs build-time artefact integrity. | Adjacent |
The delivery frameworks used by leading enterprise IT organisations deliver at massive scale with strong governance, predictability, and compliance. AI-augmented development introduces a new challenge: multiple independent AI sessions generating artefacts that are never reconciled with each other. This framework adds the missing layer, ensuring system-wide integrity across AI-generated artefacts.
| Capability | Enterprise SDLC (SAFe · ITIL-aligned) | DevOps / CI-CD | AI-Native SDLC (this framework) |
|---|---|---|---|
| Primary assumption | Humans are primary authors of code and artefacts | Humans + AI co-author — sessions have no shared memory | Designed for co-authorship from the ground up |
| Spec governance | ✔ Strong | ⚠ Partial | ✔ Strong |
| Code quality gates | ✔ Strong | ✔ Strong | ✔ Strong |
| Delivery pipelines | ✔ Strong | ✔ Strong | – Neutral |
| Compliance / audit | ✔ Strong | ⚠ Partial | ✔ Integrates |
| Cross-artefact integrity | ✗ Not designed for | ✗ Not designed for | ✔ Core focus |
| AI session consistency | ✗ Not designed for | ✗ Not designed for | ✔ Designed for it |
| Drift detection | ✗ Not visible | ✗ Not visible | ✔ First-class |
| Hallucination guards | ✗ Not designed for | ✗ Not designed for | ✔ PL-05, PL-06, PL-07 |
| Multi-model governance | ✗ Not applicable | ✗ Not applicable | ✔ Model-agnostic design |
| Structured reconciliation | ⚠ Ad-hoc | ⚠ Ad-hoc | ✔ Systematic (5 pairs) |
"Enterprise delivery frameworks like the established enterprise delivery frameworks optimise for execution correctness at scale. The AI-Native SDLC introduces a single missing capability: ensuring that independently AI-generated artefacts remain aligned, consistent, and verifiable as a system. It does not replace the delivery framework. It is the integrity layer the delivery framework was never designed to include — because AI-mediated authorship didn't exist when those frameworks were built."
A 7-phase lifecycle that ensures cross-artefact system integrity throughout AI-augmented development — model-agnostic, grounded in four real drift incidents, and designed to complement enterprise delivery frameworks.
Each phase has a trigger, an AI role, artefacts produced, and an exit gate. Click a phase tab to explore it in detail.
When two artefacts disagree, the left always wins. Declared once, enforced permanently.
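One way such a chain becomes executable rather than advisory is a small lookup that every reconciliation check consults. A minimal Kotlin sketch follows; the artefact names and their order are illustrative placeholders, not the canonical chain:

```kotlin
// Hypothetical authority chain: entries further left (earlier) win every conflict.
// The artefact names and their ordering here are illustrative, not prescriptive.
enum class Artefact { SEALED_CONTRACT, DECISION_LOG, SPEC, CODE, DOCS }

val authorityChain = listOf(
    Artefact.SEALED_CONTRACT,
    Artefact.DECISION_LOG,
    Artefact.SPEC,
    Artefact.CODE,
    Artefact.DOCS,
)

/** Returns the artefact that wins when two disagree: the one higher (further left) in the chain. */
fun resolveConflict(a: Artefact, b: Artefact): Artefact =
    if (authorityChain.indexOf(a) <= authorityChain.indexOf(b)) a else b
```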
Process rules are enforced by humans. SEAL rings are enforced by the system. The distinction matters — deadline pressure degrades human discipline. It does not degrade a git hook.
Every file in the sealed/ directory carries a header declaring its sealed status, version, and the authority that sealed it. An AI reading the file encounters the contract declaration before any implementation — making accidental mutation visible even without tooling.
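As an illustration, a sealed Kotlin file might open with a header of this shape, putting the contract declaration ahead of any implementation. The fields, package, and interface below are assumptions for the sketch, not the actual MiVA format:

```kotlin
/*
 * SEALED CONTRACT: DO NOT MODIFY
 * Status:    SEALED
 * Version:   1.0.0
 * Sealed by: architecture authority (illustrative; see your Decision Log)
 * Changes:   require authority approval; add capability via mutable/ extensions instead.
 */
package com.example.sealedcontracts   // hypothetical package

interface PlaybackContract {          // hypothetical sealed contract
    fun play(videoId: String)
    fun pause()
}
```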
Sealed files live in a dedicated sealed/ directory, physically separated from mutable code. This separation is visible in every IDE, every diff, every PR. A change to sealed/ is immediately obvious — not hidden in a large changeset. New capabilities are added in mutable/ via the extension pattern, never by editing sealed files.
A git pre-commit hook (seal-guard.sh) runs before every commit. It inspects the staged diff for any changes to files under sealed/. If any are found, the commit is rejected with a clear error message citing the specific file and the required authority approval path. The guard cannot be bypassed without --no-verify, which is logged and flagged in the next reconciliation sweep.
At seal time, a SHA-256 hash is computed for each sealed file and stored in SealedIntegrityCheck.kt. Every build re-computes the hash and compares. Any mismatch — even a single changed character — fails the build immediately. This catches changes that bypassed Ring 3 (e.g. via --no-verify) and changes made directly to files without going through git.
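A minimal sketch of that build-time check, assuming a baseline map of file path to SHA-256 digest recorded at seal time. The path, placeholder hash, and function names are illustrative, not the real SealedIntegrityCheck.kt:

```kotlin
import java.io.File
import java.security.MessageDigest

// Hypothetical baseline recorded at seal time: relative path -> SHA-256 hex digest.
val sealedBaseline = mapOf(
    "sealed/PlaybackContract.kt" to "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
)

fun sha256(file: File): String =
    MessageDigest.getInstance("SHA-256")
        .digest(file.readBytes())
        .joinToString("") { "%02x".format(it.toInt() and 0xff) }

/** Fails the build (throws) if any sealed file no longer matches its sealed baseline. */
fun verifySealedIntegrity(repoRoot: File) {
    for ((path, expected) in sealedBaseline) {
        val actual = sha256(File(repoRoot, path))
        check(actual == expected) {
            "Sealed file drift detected in $path (expected $expected, got $actual)"
        }
    }
}
```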
At application startup, SealedContractVerifier.kt re-runs the hash check against the embedded baseline. Any mismatch prevents the app from starting in production mode. In debug mode, it logs a warning with the specific file and delta. This ring is the final catch — it operates entirely at runtime, independent of the build system, catching anything that slipped through rings 1–4.
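The runtime counterpart can share the same hash check and differ only in when it runs and how it reacts. The sketch below assumes a hypothetical build flag and an injected integrity check rather than the real SealedContractVerifier.kt:

```kotlin
// Hypothetical startup verifier: hard-fail in production, warn-and-continue in debug.
// `runIntegrityCheck` stands in for whatever embedded-baseline check the app ships with,
// and `isDebugBuild` for its real build flag.
fun verifySealedContractsAtStartup(isDebugBuild: Boolean, runIntegrityCheck: () -> Unit) {
    try {
        runIntegrityCheck()
    } catch (e: IllegalStateException) {
        if (isDebugBuild) {
            println("WARN: sealed contract drift detected at startup: ${e.message}")
        } else {
            throw e   // refuse to start in production mode
        }
    }
}
```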
Sealed contracts are eternal — never deprecated, never removed. When new capability is needed, it is added via an extension that calls the sealed contract internally. The original contract remains alive, backward-compatible, and unchanged. This is how MiVA survived five model generations without a single sealed interface breaking.
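A hedged sketch of that extend-don't-edit pattern; the interface and class names are placeholders, not MiVA source:

```kotlin
// Sealed contract: lives in sealed/, hashed at seal time, never edited.
interface VideoFetcher {                       // hypothetical sealed contract
    fun fetch(videoId: String): ByteArray
}

// New capability lives in mutable/ as an extension that delegates to the sealed contract.
class CachingVideoFetcher(
    private val delegate: VideoFetcher,        // the original contract stays alive and unchanged
    private val cache: MutableMap<String, ByteArray> = mutableMapOf(),
) : VideoFetcher {
    override fun fetch(videoId: String): ByteArray =
        cache.getOrPut(videoId) { delegate.fetch(videoId) }
}
```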
| Drift Scenario | Process Layer catches? | SEAL Architecture catches? | How |
|---|---|---|---|
| AI edits sealed file accidentally | No — needs manual review | Yes — Ring 3 blocks commit | Git pre-commit hook |
| Sealed function count mismatch (CD1) | Only if reconciliation runs | Yes — Ring 4 fails build | Hash verification |
| Contract changed via --no-verify | No | Yes — Ring 4 + 5 | Build + runtime hash |
| Spec says 19 functions, code has 23 | Only in Reconcile phase | Yes — SealedContractTest | Test fails at build |
| New model changes AI behaviour | Yes — model transition ritual | Yes — artefacts unchanged | Both layers |
| Spec and code diverge over weeks | Yes — weekly sweep | Partial — sealed only | Process layer essential here |
RECONCILE appears as Phase 6 in the lifecycle. But it is actually the mechanism that makes the entire framework work. Drift doesn't accumulate between phases — it accumulates continuously. The reconciliation system responds to that reality.
AI-Builder and AI-Reviewer are different chat sessions, ideally from different model families.
| Phase | Human Owner | AI Builder | AI Reviewer | External |
|---|---|---|---|---|
| FRAME | R/A | C | — | I |
| SEAL | R/A | C | C | C |
| SPEC | A | R | C | I |
| BUILD | A | R | — | — |
| VERIFY | A | — | R | C |
| RECONCILE | A | — | R | I |
| EVOLVE | R/A | C | C | I |
R = Responsible · A = Accountable · C = Consulted · I = Informed
The operational adapter layer. Model selection matrix, nine-prompt library, twelve AI failure modes, daily/weekly rituals. When models change, only this document updates.
Task types are stable. The "Current Pick" column is the only thing that changes when a new model releases.
AI-generated code defaults to the simplest implementation — often the one most vulnerable to drift. These eight patterns, drawn from the MiVA build, produce code that is structurally resistant to drift by design. Each maps directly to a specific failure mode.
| Pattern | Failure Mode Prevented | MiVA Evidence | Enterprise Risk if Missing |
|---|---|---|---|
| P1 Sealed Interface | FM-10 · CD1 | 19 vs 23 function count drift | Breaking API changes ship silently |
| P2 Command Bus | V2 race condition | PrefetchEventBus eliminates mutable state race | Data corruption under load |
| P3 Smart Retry | V5 · O4 RestException | 502s getting NO_RETRY → instant user error | Transient outages appear as hard failures |
| P4 Atomic Write | V9 partial write | Corrupted video files on interrupted download | Silent data corruption in file-based systems |
| P5 Repository | FM-01 · FM-06 | show_columns() rule — hallucinated columns fail at repo boundary | Silent null returns from hallucinated queries |
| P6 CE Guard | V7 · 44 guards | Every MivaApi function had swallowed cancellation | Resource leaks · zombie coroutines in prod |
| P7 Strategy | FM-11 attribution | DeviceProfile 3-tier vs inline API level checks | Capability branching becomes untestable |
| P8 Optimistic Grant | F4 webhook delay | Razorpay webhook latency eliminated from UX path | Payment cliff → user abandonment |
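As one worked example, here is a hedged sketch of P6, the CE Guard, in coroutine code. The function and its wrapper are placeholders, not the MivaApi source; the point is the ordering of the catch clauses:

```kotlin
import kotlin.coroutines.cancellation.CancellationException

// P6 CE Guard (sketch): rethrow CancellationException before any generic catch,
// so a cancelled coroutine is never silently "handled" and left running as a zombie.
suspend fun fetchLessonSafely(fetch: suspend () -> String): Result<String> =
    try {
        Result.success(fetch())
    } catch (e: CancellationException) {
        throw e                       // never swallow cancellation
    } catch (e: Exception) {
        Result.failure(e)             // ordinary failures become a Result for the caller
    }
```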
A prompt is not chat text — it is reusable infrastructure encoding hard-won patterns. Every prompt has a phase, a model class, and a purpose.
The actual 6-month architecture-to-beta build that produced the methodology. Four drift incidents. Five model transitions. What broke, what worked, and what we'd do differently from day one.
MiVA is an education platform for Indian Class 10 students. The core thesis: 3–4 hours of daily Instagram Reel consumption redirected toward syllabus content in the same format.
The build didn't follow the clean SDLC the framework now prescribes — that mess is precisely what the SDLC is designed to prevent next time.
Click each incident to see what happened, the root cause, and what rule it produced in the SDLC.
| Transition | What broke | What held |
|---|---|---|
| Opus 4.5 → 4.6 | Prompts tuned for 4.5 were slightly under-specified; more concise outputs | All artefacts — SSOT, Decision Log, sealed contracts — unchanged |
| Opus 4.6 → 4.7 | Context packaging needed tightening; 4.7 caught more subtle drift | Phase structure; prompts needed minor tuning only |
| Sonnet 4.5 → 4.6 Extended | Attachment budgets and output window changed; Model Selection Matrix updated | The one-chat-per-unit pattern scaled unchanged |
| Claude ↔ ChatGPT | More explicit role instructions needed; formatting conventions differ | Review-prompt structure (PL-07) generalised with minor adaptation |
| Claude ↔ Gemini | Tool use patterns differ substantially | The principle of cross-family review held |
"The SDLC framework is not the MiVA build. It is what the MiVA build taught us the process should have been from day one. The next project gets to start there."
A standalone 2-week diagnostic engagement. Fixed-scope. Fixed-fee. Five reconciliation pairs. Usable without adopting any larger framework.
Two weeks is the precise point where all five pairs can be examined with enough depth to produce actionable findings without over-scoping.
Click a pair to see exactly what to check during the audit.
Consistent classification makes the Drift Audit a credible diagnostic. When in doubt, classify up.
Hourly pricing destroys the value proposition. The Drift Audit's selling point is that it is bounded, predictable, and decision-ready in 2 weeks.
Starter ≈ 50% of Standard · Extended ≈ 150% of Standard · Retainer ≈ 15% of Standard / month
These disqualifiers exist to be used. Decline engagements that match them — they produce bad outcomes and damage the methodology's reputation.
A lightweight governance overlay for teams using AI in development. No tool replacement. No org restructure. No slowdown. Start with three rules and expand from there.
If your team uses AI coding assistants and has said any of the following, you are already experiencing drift.
If your team resists process overhead, start with just these three. They alone catch 80% of observed drift with almost zero friction.
A structured pilot that proves the methodology works on your codebase — before you ask your team to adopt it fully.
Within 2 weeks you should see measurable signals — even on a small team.
| Metric | What you are measuring | Good signal |
|---|---|---|
| Drift incidents detected | Mismatches found between code, spec, and decisions | 5–20 in week 1 (this is normal — they were always there) |
| Source-First catches | Times AI would have guessed wrong without reading file | ≥ 1 per day on active AI usage |
| Bug misdiagnosis rate | Fixes applied to wrong file or wrong assumption | Drops noticeably in week 2 |
| Authority disputes | Times team argued about which version is correct | Resolved by chain, not debate |
| "Phantom bug" incidents | Bugs that looked fixed but weren't | Decreasing trend week over week |
The Drift Index (DI) is a simple, computable measure of artefact consistency in your AI-augmented codebase. Rate each pair honestly — the score tells you where to start.
Formula: DI = (pairs with at least one mismatch) ÷ (pairs checked) · Scale: 0.0 = no drift · 1.0 = complete divergence · Data stays in your browser.
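A minimal sketch of that computation, assuming each reconciliation pair is scored simply as aligned or mismatched; the pair names below are illustrative:

```kotlin
// Drift Index sketch: fraction of checked pairs that showed at least one mismatch.
data class PairResult(val name: String, val mismatched: Boolean)

fun driftIndex(results: List<PairResult>): Double =
    if (results.isEmpty()) 0.0
    else results.count { it.mismatched }.toDouble() / results.size

fun main() {
    val sweep = listOf(
        PairResult("spec ↔ code", mismatched = true),
        PairResult("decisions ↔ code", mismatched = false),
        PairResult("schema ↔ repository", mismatched = true),
        PairResult("docs ↔ spec", mismatched = false),
        PairResult("tests ↔ spec", mismatched = false),
    )
    println("DI = ${driftIndex(sweep)}")   // 0.4: drift concentrated in two of five pairs
}
```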
Every framework has boundaries. Knowing them prevents bad adoption decisions.
The Drift Index (DI) is a computable metric for artefact alignment across your codebase. Run a quick reconciliation sweep and enter the numbers below. DI = 0 means full alignment. DI = 1 means everything is out of sync.
30-minute qualifying call. We establish whether your team has a drift problem, scope a 2-week pilot, and define success criteria together. No commitment.
Customise the AI-Native SDLC for your team. Rename phases, add sub-steps, set owners, adjust gates. Export as Markdown or JSON when you're done.
Click any phase to expand and edit. Drag to reorder. Add custom phases with the button below.
Drag to reorder. The top item wins all conflicts.
Define what artefact pairs your team will reconcile. Toggle on/off. Add your own.
Enterprise delivery frameworks, SAFe, GitHub Spec Kit — all strong frameworks solving real problems. None of them address what happens to artefact integrity when multiple AI sessions author your codebase. That is the gap.
Every framework below is solving a real, important problem. The AI-Native SDLC addresses a specific gap that emerged only after AI coding assistants became mainstream — cross-session artefact drift. Most of these frameworks were designed before that problem existed at scale.
Delivery velocity, team coordination, cloud migration, AI tool adoption, automated testing, sprint governance, distributed agile, enterprise transformation.
None address what happens when different AI sessions author different artefacts that slowly diverge from each other — invisibly, plausibly, and at speed.
AI-Native SDLC sits as a governance overlay. Teams using enterprise AI delivery platforms or SAFe can adopt it without replacing their existing delivery framework.
Every framework below addresses delivery. Only one addresses what happens to artefact integrity when AI is doing the authoring. That is the column that matters for teams experiencing drift.
✅ Strong · ⚠️ Partial · ❌ Not addressed · These are honest assessments, not marketing claims.
The Location Independent Agile™ enterprise agile framework is proven at 6,000+ engagements. Enterprise AI platforms paired with Cobalt have transformed enterprise AI adoption at scale. These are excellent frameworks for what they do. What they don't do — and cannot do without a significant architectural redesign — is govern cross-artefact integrity in multi-session, multi-model AI-augmented development. That is not a criticism. It's a gap in the market that didn't exist until AI coding assistants became mainstream in 2024–2025. The AI-Native SDLC is the governance layer that fills it.