Open Methodology · v1.1 · April 2026

System Integrity for
AI-Augmented
Development

Every AI-augmented team produces artefacts that are individually plausible but collectively inconsistent. This framework governs cross-artefact integrity — ensuring code, spec, decisions, and schema stay aligned as models change, sessions end, and teams grow.

GitHub ↗
Built on a real production build
6
Months from architecture to beta
October 2025 – April 2026
4
Documented drift incidents
Root-cause-to-rule traceability
5
Model generations survived
Claude Opus · Sonnet · ChatGPT · Gemini across 3 families
CC
BY 4.0 Licensed
Use it, adapt it, credit the author
The Problem

Your AI-augmented team is
accumulating drift it cannot see.

Every artefact — code, spec, decisions, docs — is authored by different AI sessions at different times. Each looks plausible on its own. None are reconciled with each other.

CD1 · High
Sealed Function Count Drift
SSOT declared 19 sealed functions. Code had 23. Both authored by different AI sessions. Neither was wrong when written. Nobody noticed the delta for weeks.
→ RECONCILE phase · Pair A sweep · weekly cadence
CD2 · Critical
Regeneration Loss
SSOT was "updated" by regenerating it. Two sections silently disappeared. The AI didn't know which sections to preserve. Caught only by a hash change.
→ Artefact Update Policy · PL-03 Surgical Edit prompt
CD3 · Critical
Wrong File Attribution
Bug fix applied to MivaReelPlayer.kt. Bug persisted. Logic lived in ChapterPlayerScreen.kt — a file the AI had never opened in that chat.
→ Source-First Rule · PL-05 Source-First Enforcement
CD4 · High
Status Drift
V1–V11 vulnerability fixes applied to code. SSOT still showed "Pending." Both correct when written. Mutually inconsistent for weeks before a release review caught it.
→ Authority Chain · Decision Log ↔ SSOT sweep
The Methodology

Four documents.
One complete framework.

Each document serves a distinct reader. A CTO reads the Drift Audit. An architect reads the SDLC. A developer lives in the Master Build Guide. A sceptic reads the Case Study.

Document 01
Architects · Tech Leads
The AI-Native SDLC
A 7-phase lifecycle — Frame → Seal → Spec → Build → Verify → Reconcile → Evolve — designed to be model-agnostic. Only the adapter layer updates when models change.
Document 02
Developers · Engineering Managers
Master Build Guide
Model Selection Matrix. Nine-prompt library (PL-01 to PL-09). Twelve AI failure modes catalogued with countermeasures. Daily, weekly, and per-milestone rituals.
Document 03
Sceptics · Practitioners
Case Study: From Solo to Production
The actual 6-month architecture-to-beta build. Four drift incidents with root-cause-to-rule traceability. Five model transitions documented. What broke, what worked.
Document 04
CTOs · Consulting Practitioners
The Drift Audit
A standalone 2-week diagnostic engagement. Fixed-scope, fixed-fee. Five reconciliation pairs. Severity classification rubric. Full report template and commercial model.
What actually prevents drift

Three layers.
Drift made detectable, bounded, and correctable.

Process alone cannot prevent drift. MiVA's anti-drift strength came from three reinforcing layers working together — each one catching what the layer above missed.

LAYER 1
Process Layer
SDLC Phases · Governance

The 7-phase lifecycle, reconciliation sweeps, authority chain, and verification gates. Governs how work flows and ensures artefacts stay aligned over time.

Frame → Seal → Spec → Build
Verify → Reconcile → Evolve
Weekly drift sweeps · CD-series log
LAYER 2
🔒
Architectural Layer
SEAL v3 · 5-Ring Enforcement

Sealed interfaces enforced by the system, not by human discipline. Five physical enforcement rings make it structurally impossible to modify sealed contracts accidentally.

Ring 1: Self-describing file headers
Ring 2: Physical sealed/ directory
Ring 3: Git pre-commit guard
Ring 4: Build-time hash verification
Ring 5: Runtime integrity check
LAYER 3
🧱
Code Layer
Design Patterns · Structural Resistance

Software engineering patterns that make the code structurally resistant to drift by design. AI-generated code defaults to naive implementations — these patterns override that.

Sealed Interface · Extension Pattern
Command Bus · Atomic Write
Repository · Strategy · Observer
💡
The key insight no other SDLC framework states

Process discipline degrades under deadline pressure. Architectural enforcement does not. Every ring in the SEAL layer catches drift that process rules miss — because it operates at the system level, not the human level. The most reliable anti-drift mechanism is one that makes the wrong action structurally impossible, not merely discouraged.

Compatibility

Not a replacement.
A governance overlay.

This methodology doesn't compete with Spec Kit, SAFe, or DevOps. It addresses the AI-specific risks those frameworks were not designed for.

Framework | Relationship | Type
GitHub Spec Kit / SDD | SDD generates specifications. This methodology keeps them aligned with code over time. | Complementary
SAFe Agile | SDLC phases fit inside sprint cadence. Reconciliation Sweeps map to Inspect & Adapt. | Embedded
DevOps / CI-CD | Audit findings typically produce new automated checks added directly to the pipeline. | Additive
ISO 9001 / 27001 / GxP | CD-series Drift Incident Log is analogous to non-conformance reporting. Integrates with QMS. | Integrates
Arthur AI · Atlan | Those govern runtime agent behaviour. This governs build-time artefact integrity. | Adjacent
Enterprise Positioning

Built for scale.
Extended for the AI era.

Enterprise IT delivery frameworks operate at massive scale with strong governance, predictability, and compliance. AI-augmented development introduces a new challenge: multiple independent AI sessions generating artefacts that are never reconciled with each other. This framework adds the missing layer — ensuring system-wide integrity across AI-generated artefacts.

Traditional SDLC assumption
"If each step is correct,
the system will be correct."
Valid for human-authored code with linear authorship
AI era
AI-Native SDLC insight
"Each step can be correct —
and the system can still be wrong."
Asynchronous AI authorship creates system-level inconsistency invisible to standard gates

What enterprise delivery frameworks do brilliantly.

🏗
Scale
100–10,000 engineers coordinated
📋
Governance
Strong approvals, audits, controls
📅
Predictability
Milestones, SLAs, delivery plans
Compliance
ISO, ITIL, SOX, regulatory
🏆
Process maturity
Decades of refinement
The assumption all of these are built on: humans are the primary authors of code, spec, and decisions. In a world of human authorship with tool assistance, this assumption holds. In a world of AI authorship with human oversight, it does not.

Where they encounter a new failure mode.

NEW FAILURE MODE
Spec–Code Drift
AI generates code independently of spec in a different session. Both are internally consistent. Together they diverge. Standard QA doesn't compare them — it tests the code against test cases, not against the spec.
NEW FAILURE MODE
Multi-Session Inconsistency
AI sessions have no shared memory. A decision made in session 1 is unknown to session 12. Status fields drift — correct at write time, wrong at read time. No existing governance framework tracks session-level authorship.
NEW FAILURE MODE
Confident Hallucination at Scale
AI references functions, columns, and APIs that don't exist — with full confidence and correct-looking syntax. Compiles. Passes lint. Fails at runtime. This failure mode scales directly with AI usage — more AI sessions, more hallucination surface area.
NEW FAILURE MODE
Silent System Divergence
Everything looks correct at each individual review point. The system diverges at the artefact boundary — between code and spec, between spec and decisions, between schema and queries. Invisible to Jira, Git, CI/CD, and code review independently.

Capability comparison — where each layer operates.

Capability | Enterprise SDLC (SAFe · ITIL-aligned) | DevOps / CI-CD | AI-Native SDLC (this framework)
Primary assumption | Humans are primary authors of code and artefacts | Humans + AI co-author — sessions have no shared memory | Designed for co-authorship from the ground up
Spec governance | ✔ Strong | ✔ Partial | ✔ Strong
Code quality gates | ✔ Strong | ✔ Strong | ✔ Strong
Delivery pipelines | ✔ Strong | ✔ Strong | – Neutral
Compliance / audit | ✔ Strong | – Partial | ✔ Integrates
Cross-artefact integrity | ✗ Not designed for | ✗ Not designed for | ✔ Core focus
AI session consistency | ✗ Not designed for | ✗ Not designed for | ✔ Designed for it
Drift detection | ✗ Not visible | ✗ Not visible | ✔ First-class
Hallucination guards | ✗ Not designed for | ✗ Not designed for | ✔ PL-05, PL-06, PL-07
Multi-model governance | ✗ Not applicable | ✗ Not applicable | ✔ Model-agnostic design
Structured reconciliation | ⚠ Ad-hoc | ⚠ Ad-hoc | ✔ Systematic (5 pairs)

How it fits into your existing stack.

Without
Requirements → Design → Build → Test → Deploy
Governance / QA / Audit
? Cross-artefact integrity gap
Scenario: BA writes spec. Dev uses AI → code generated. QA tests → passes. Production → edge case fails. Root cause: spec and code diverged across sessions. Nobody noticed. Standard gates didn't check artefact alignment.
With AI-Native SDLC layer
Requirements → Design → Build → Test → Deploy
Governance / QA / Audit
⟳ Reconciliation Layer (AI-Native SDLC)
Drift detection · Artefact alignment · AI output validation
Same scenario: Reconciliation sweep detects spec–code mismatch before QA. Authority chain resolves conflict. Drift logged. Fix applied in the right place — the source of truth, not the symptom. QA now tests a coherent system.

Integrates with your existing toolchain.

📋
Jira / Linear
Task tracking stays unchanged. Drift incidents become tickets.
🌿
Git / GitHub
Pre-commit hooks enforce sealed contracts. PRs include artefact review.
CI/CD
Build-time hash verification added as a step. Sealed contract tests run in pipeline.
📊
Confluence / Notion
SSOT and Decision Log live here. Becomes structured, not informal.
🤖
AI Assistants
Claude, ChatGPT, Copilot — used with structured prompts from the MBG library.
Positioning statement

"Enterprise delivery frameworks like the established enterprise delivery frameworks optimise for execution correctness at scale. The AI-Native SDLC introduces a single missing capability: ensuring that independently AI-generated artefacts remain aligned, consistent, and verifiable as a system. It does not replace the delivery framework. It is the integrity layer the delivery framework was never designed to include — because AI-mediated authorship didn't exist when those frameworks were built."

Author

Built from a real build.
Not a theory.

Founder · MiVA Education
Suneesh Sidharthan
Solution Architect · AI Platform Engineer
AI-Native SDLC Architect · Founder, MiVA Education
Chalakudy, Kerala, India
Background

Over 20 years across telecom, banking, analytics, and digital platforms as a Solution Architect and AI Platform Engineer. The AI-Native SDLC emerged from building MiVA Education — a bilingual Android + PostgreSQL + web learning platform for Indian Class 10 students — almost entirely solo, across five model generations of AI assistants.

The methodology was not designed in advance. It emerged from what broke — four drift incidents that each revealed a missing process control. The framework is what the build taught us the process should have been from day one.

Available for Drift Audit engagements and methodology consulting. Get in touch if your team is using AI assistants in production and suspects the codebase no longer matches the story you tell about it.

AI Platform Engineering · Solution Architecture · Android / Kotlin · Supabase / PostgreSQL · EdTech · Drift Detection
Document 01 · Framework

The AI-Native SDLC Framework

A 7-phase lifecycle that ensures cross-artefact system integrity throughout AI-augmented development — model-agnostic, grounded in four real drift incidents, and designed to complement enterprise delivery frameworks.

Version
v1.1 · April 2026
Phases
7 — Frame to Evolve
Design Principles
P1 – P7
Grounded in
CD1 – CD4 drift incidents
Interactive · Click each phase

The 7-Phase Lifecycle

Each phase has a trigger, an AI role, artefacts produced, and an exit gate. Click a phase tab to explore it in detail.

01
FRAME
02
SEAL
03
SPEC
04
BUILD
05
VERIFY
06
RECONCILE
07
EVOLVE
Conflict Resolution

The Authority Chain

When two artefacts disagree, the higher authority always wins. Declared once, enforced permanently.

1. DATABASE · Primary, always wins. The deployed schema is the single ground truth.
2. DECISION LOG · Second. Append-only record of every architectural choice.
3. SSOT · Third. Single Source of Truth — the current spec document.
4. CODE · Fourth. Implementation — must conform to the spec above it.
5. DOCS · Fifth, always loses. README, wikis, onboarding — last to be updated, first to drift.
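A minimal sketch of the chain as data (assumed names; illustrative only, not MiVA code). Writing it as an ordered enum turns conflict resolution into a comparison rather than a debate:

// Lower ordinal = higher authority in the chain
enum class Authority { DATABASE, DECISION_LOG, SSOT, CODE, DOCS }

data class Claim(val source: Authority, val statement: String)

// When two artefacts disagree, the higher authority wins
fun resolve(a: Claim, b: Claim): Claim =
    if (a.source.ordinal <= b.source.ordinal) a else b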
Design Principles

Seven rules that govern
every decision.

P1
Model-Agnostic by Default
No artefact, template, or gate depends on a specific AI model. Models are interchangeable tools; the process is the constant.
P2
Artefacts Over Conversations
Every decision, contract, and assumption lives in a versioned file. Chat history is a side-effect, not a source of truth.
P3
Authority Chain is Explicit
DATABASE > DECISION LOG > SSOT > CODE > DOCS. When artefacts conflict, the left wins. Declared and enforced, not inferred.
P4
Sealed Interfaces, Mutable Implementations
Lock the contract, not the code. New capabilities extend sealed interfaces; they do not modify them.
P5
Verification is Non-Negotiable
Every AI-produced artefact passes through at least one verification gate before entering production or the Decision Log.
P6
One Unit of Work, One Chat
One chapter, one bug class, one module — one isolated chat. Long chats drift. Short focused chats don't.
P7
Rollback is First-Class
Canonical backups, sealed baselines, and named archives exist at every milestone. Undo is a process capability, not a developer superpower.
Architectural Enforcement

SEAL v3 — Five Rings
of Physical Enforcement.

Process rules are enforced by humans. SEAL rings are enforced by the system. The distinction matters — deadline pressure degrades human discipline. It does not degrade a git hook.

R1
Self-Describing File Headers
Human-readable · Zero tooling required

Every file in the sealed/ directory carries a header declaring its sealed status, version, and the authority that sealed it. An AI reading the file encounters the contract declaration before any implementation — making accidental mutation visible even without tooling.

// SEALED CONTRACT — DO NOT MODIFY
// Authority: SSOT v4.7 §E2 · Sealed: 2026-01-14
// Hash: sha256:a3f8c2... · Ring: 1/5
// Modification requires: major version bump + arch review
Prevents: FM-07
Accidental edit
R2
Physical Directory Isolation
Filesystem-level · Visible in IDE + Git

Sealed files live in a dedicated sealed/ directory, physically separated from mutable code. This separation is visible in every IDE, every diff, every PR. A change to sealed/ is immediately obvious — not hidden in a large changeset. New capabilities are added in mutable/ via extension pattern, never by editing sealed files.

app/src/main/java/com/miva/
├── data/
│ ├── sealed/ # IMMUTABLE — contracts only
│ │ ├── SealedApi.kt
│ │ ├── contracts/
│ │ └── model/
│ └── mutable/ # all implementation goes here
│ ├── api/
│ └── network/
Prevents: CD1
Count drift
R3
Git Pre-Commit Guard
Automated · Blocks commit if sealed/ touched

A git pre-commit hook (seal-guard.sh) runs before every commit. It inspects the staged diff for any changes to files under sealed/. If any are found, the commit is rejected with a clear error message citing the specific file and the required authority approval path. The guard cannot be bypassed without --no-verify, which is logged and flagged in the next reconciliation sweep.

✗ SEAL GUARD: Commit rejected
Modified sealed file detected:
sealed/contracts/SealedApi.kt
Sealed files require:
1. Major version bump in SSOT
2. Architecture review board approval
3. Decision Log entry before commit
Use --no-verify ONLY with arch approval logged.
Prevents: FM-10
Silent mutation
R4
Build-Time Hash Verification
CI/CD level · Fails build if hash mismatch

At seal time, a SHA-256 hash is computed for each sealed file and stored in SealedIntegrityCheck.kt. Every build re-computes the hash and compares. Any mismatch — even a single changed character — fails the build immediately. This catches changes that bypassed Ring 3 (e.g. via --no-verify) and changes made directly to files without going through git.

// SealedIntegrityCheck.kt
val SEALED_HASHES = mapOf(
  "SealedApi.kt" to "a3f8c2d...",
  "Models.kt" to "b9e21a4...",
  "GrantContracts.kt" to "c7f3b8e..."
)

// Verified at app startup + build time
fun verifyIntegrity(): SealStatus
Prevents: CD1
FM-10
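A minimal sketch of the seal-time step that produces the baseline Ring 4 compares against. The helper names (sha256, generateSealedHashes) are assumptions, not MiVA's actual tooling:

import java.io.File
import java.security.MessageDigest

// Hash one sealed file; the build step recomputes the same digest and compares
fun sha256(file: File): String =
    MessageDigest.getInstance("SHA-256")
        .digest(file.readBytes())
        .joinToString("") { "%02x".format(it) }

// Walk sealed/ and emit the entries stored in SEALED_HASHES at seal time
fun generateSealedHashes(sealedDir: File): Map<String, String> =
    sealedDir.walkTopDown()
        .filter { it.isFile && it.extension == "kt" }
        .associate { it.name to sha256(it) }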
R5
Runtime Integrity Verification
Production · Detects tampering at runtime

At application startup, SealedContractVerifier.kt re-runs the hash check against the embedded baseline. Any mismatch prevents the app from starting in production mode. In debug mode, it logs a warning with the specific file and delta. This ring is the final catch — it operates entirely at runtime, independent of the build system, catching anything that slipped through rings 1–4.

// Runs at app startup — before any feature
fun verifyAll(): SealStatus {
  return SEALED_HASHES.entries.map { (file, expected) ->
    val actual = sha256(loadFile(file))
    if (actual != expected) SealBreach(file, actual, expected)
    else SealIntact(file)
  }.let { results ->
    if (results.any { it is SealBreach }) INTEGRITY_FAILED
    else INTEGRITY_CONFIRMED
  }
}
Final catch
All rings
The Extension Pattern

Eternal interfaces. Infinite evolution.

Sealed contracts are eternal — never deprecated, never removed. When new capability is needed, it is added via an extension that calls the sealed contract internally. The original contract remains alive, backward-compatible, and unchanged. This is how MiVA survived five model generations without a single sealed interface breaking.

✗ Forbidden — Mutation
// Changing the sealed function
// breaks every caller
fun getUser(id: String): User
fun getUser(id: String,
            withPrefs: Boolean): User
✓ Correct — Extension
// Sealed function untouched
fun getUser(id: String): User

// New capability as extension
fun getUserWithPrefs(id: String) =
  getUser(id).also { loadPrefs(it) }
Why architecture beats process alone

What each layer actually catches.

Drift Scenario | Process Layer catches? | SEAL Architecture catches? | How
AI edits sealed file accidentally | No — needs manual review | Yes — Ring 3 blocks commit | Git pre-commit hook
Sealed function count mismatch (CD1) | Only if reconciliation runs | Yes — Ring 4 fails build | Hash verification
Contract changed via --no-verify | No | Yes — Rings 4 + 5 | Build + runtime hash
Spec says 19 functions, code has 23 | Only in Reconcile phase | Yes — SealedContractTest | Test fails at build
New model changes AI behaviour | Yes — model transition ritual | Yes — artefacts unchanged | Both layers
Spec and code diverge over weeks | Yes — weekly sweep | Partial — sealed only | Process layer essential here
The Core Mechanism

Continuous Reconciliation
is not a phase. It's a system.

RECONCILE appears as Phase 6 in the lifecycle. But it is actually the mechanism that makes the entire framework work. Drift doesn't accumulate between phases — it accumulates continuously. The reconciliation system responds to that reality.

Three trigger types — continuous coverage.

Event-Triggered
Immediate · Targeted

Fires on specific development events. Checks only the artefact pair directly affected by that event. Fast, low-overhead, catches drift at the point of introduction.

git commit → Pair D (sealed hash)
PR opened → Pair A (code vs SSOT)
Schema migration → Pair C (DB vs code)
AI session closed → Decision Log check
📅
Scheduled Sweep
Comprehensive · Periodic

Runs all five reconciliation pairs on a fixed cadence regardless of events. Catches drift that accumulated between event triggers — the slow drift that no single commit reveals.

Daily: Pair A + Pair D (15 min)
Weekly: All 5 pairs (90 min)
Per milestone: All pairs + zero-delta gate
Per model release: All pairs + Pair E
Threshold Alert
Automatic · Escalating

Fires when drift metrics cross defined thresholds. Escalates severity automatically. Blocks milestone gates if critical thresholds are breached.

DI > 0.1 → Weekly sweep triggered
DI > 0.3 → Drift Audit recommended
DI > 0.6 → Milestone gate blocked
Sealed breach → Build fails immediately
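Taken together, the three triggers reduce to a small dispatch-and-escalate rule. A sketch with assumed names, using only the pair mappings and thresholds listed above:

// Event-triggered: check only the pair the event touches
fun pairsFor(event: String): List<String> = when (event) {
    "git commit"       -> listOf("D")                      // sealed hash
    "PR opened"        -> listOf("A")                      // code vs SSOT
    "schema migration" -> listOf("C")                      // DB vs code
    else               -> listOf("A", "B", "C", "D", "E")  // scheduled sweep: all pairs
}

// Threshold alert: escalate as the Drift Index climbs
fun escalate(di: Double, sealedBreach: Boolean): String = when {
    sealedBreach -> "Build fails immediately"
    di > 0.6     -> "Milestone gate blocked"
    di > 0.3     -> "Drift Audit recommended"
    di > 0.1     -> "Weekly sweep triggered"
    else         -> "Within tolerance"
}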

The reconciliation system flow.

🔔
TRIGGER
Event / Schedule / Threshold
🔍
SWEEP
Run target pairs via PL-08
📊
MEASURE
Calculate DI per pair + total
🏷
CLASSIFY
Severity: Critical / High / Med / Low
📝
LOG
Open CD-series incident with root cause
🔧
RESOLVE
Fix per authority chain priority
VERIFY
Re-run affected pair. DI recalculated.

Automation integration hooks.

Git hooks (Rings 3 + 4)
#!/bin/bash
# .git/hooks/pre-commit
set -e                      # any failing guard blocks the commit
./scripts/seal-guard.sh     # Ring 3
./scripts/hash-verify.sh    # Ring 4

#!/bin/bash
# .git/hooks/pre-push
set -e
./scripts/pair-a-quick.sh   # Code vs SSOT
CI/CD pipeline step
# GitHub Actions / GitLab CI
- name: Drift Reconciliation
  run: |
    ./scripts/drift-sweep.sh --pairs A,C,D
    DI_SCORE=$(./scripts/di-calculate.sh)
    # Block the pipeline if the Drift Index breaches the milestone gate
    if awk "BEGIN { exit !($DI_SCORE > 0.6) }"; then
      exit 1
    fi
Current status in MiVA: Rings 3 and 4 are implemented (git pre-commit guard + build-time hash). Weekly PL-08 sweeps are manual. CI pipeline integration for Pairs A and C is on the roadmap — the scripts exist, the automation step is the next increment.
Roles & Responsibilities

RACI — AI is a role, not a tool.

AI-Builder and AI-Reviewer are different chat sessions, ideally from different model families.

Roles: Human Owner · AI Builder · AI Reviewer · External
FRAME: R/A, C, I
SEAL: R/A, C, C, C
SPEC: A, R, C, I
BUILD: A, R
VERIFY: A, R, C
RECONCILE: A, R, I
EVOLVE: R/A, C, C, I

R = Responsible · A = Accountable · C = Consulted · I = Informed

What to measure

Metrics that actually matter.

Track these
Drift incidents per sprint
Leading indicator of process decay
≤ 0
Ready → merge time
Long queues = verification too heavy
< 24h
BUILD chats under 60% context
Near-limit chats spike hallucination
> 80%
Sealed contracts violated
Any violation = SEV1
0%
Decision Log entries / week
Too few = ad-hoc; too many = unsealed
3–10
Never track these
Lines of code per day — AI makes this meaningless and harmful to optimise
Number of AI interactions — rewards chattiness over clarity
Speed to first commit — rewards skipping FRAME and SEAL
Token spend in isolation — a cheap hallucinated output costs more than an expensive verified one
Document 02 · Operational

The Master Build Guide

The operational adapter layer. Model selection matrix, nine-prompt library, twelve AI failure modes, daily/weekly rituals. When models change, only this document updates.

Prompts
PL-01 to PL-09
Failure Modes
FM-01 to FM-12
Rituals
Daily · Weekly · Milestone
Model Selection Matrix · April 2026

Right model.
Right task.

Task types are stable. The "Current Pick" column is the only thing that changes when a new model releases.

Task Type | Model Class | Why | Current Pick
Architecture decisions, SSOT drafting, spec review | Highest-reasoning | Mistakes here cost downstream | Opus 4.7
Bulk code generation (>500 lines/file) | Highest-output-window | Truncation = rework | Sonnet 4.6 Extended
Scientific / domain-accurate content | Highest-reasoning | Hallucination unacceptable in education | Opus 4.7
Repetitive module/chapter builds | Balanced | Volume work with proven pattern | Sonnet 4.6
Quick fixes, small refactors, one-file edits | Fast / cheap | Latency and cost matter | Sonnet 4.6
Governance review (second opinion) | Different family | Diverse training = different blind spots | ChatGPT / Gemini
SSOT review · Drift reconciliation sweeps | Highest-reasoning | Must catch subtle drift | Opus 4.7
Code Layer · Anti-Drift by Design

Design Pattern Catalogue.
Structural resistance to drift.

AI-generated code defaults to the simplest implementation — often the one most vulnerable to drift. These eight patterns, drawn from the MiVA build, produce code that is structurally resistant to drift by design. Each maps directly to a specific failure mode.

P1
Sealed Interface Pattern
Prevents: FM-10 · CD1 · Contract mutation
Core pattern ▾
What it is

A public contract (API, schema, function signatures) is declared immutable for the lifetime of a major version. The contract is implemented separately from the declaration — callers bind to the contract, never to the implementation. This means the implementation can change freely without breaking callers, and the contract cannot drift without a deliberate, logged, reviewed version bump.

MiVA instance
SealedApi.kt declared 19 functions — the public contract for all Android feature modules. No feature module imports anything outside SealedApi. When CD1 revealed 23 functions in code vs 19 in spec, the pattern itself was the diagnostic — the count mismatch was immediately measurable.
Implementation
// SEALED — do not modify
interface SealedApi {
  fun getChapters(): Flow<List<Chapter>>
  fun getClips(chapterId: String): Flow<List<Clip>>
  fun trackProgress(clipId: String, pct: Float)
  // ... 16 more sealed functions
}

// MUTABLE — implementation details
class MivaApi : SealedApi {
  override fun getChapters() = ...
}

// Extension — new capability, sealed intact
fun SealedApi.getChaptersWithProgress() =
  getChapters().map { addProgress(it) }
P2
Command Bus / Event Bus Pattern
Prevents: V2 race condition · mutable state drift
Concurrency
What it is

Instead of multiple coroutines writing to shared mutable state, all mutations are expressed as typed commands sent to a single channel. The channel processes commands sequentially, eliminating race conditions by construction. The bus itself is the sealed contract — callers can only send commands, never modify state directly.

MiVA instance — V2 fix
PrefetchEventBus replaced direct mutation of MivaPrefetchManager state. Race condition between multiple coroutines writing to the same prefetch queue eliminated by routing all writes through a single channel.
Implementation
sealed class PrefetchCommand {
  data class Start(val chapterId: String) : PrefetchCommand()
  data class Cancel(val chapterId: String) : PrefetchCommand()
  object ClearAll : PrefetchCommand()
}

// Single channel — sequential processing
val PrefetchEventBus = Channel<PrefetchCommand>(BUFFERED)

// No direct state mutation allowed
suspend fun startPrefetch(id: String) {
  PrefetchEventBus.send(PrefetchCommand.Start(id))
}
P3
Smart Retry with Error Classification
Prevents: V5 · O4 RestException gap · silent failures
Resilience
What it is

AI-generated retry logic typically catches Exception broadly — which swallows CancellationException and retries on errors that should fail immediately (auth failures, data errors). Smart retry classifies errors first: transient errors get retried with exponential backoff, permanent errors fail fast, cancellations propagate immediately.

Open item O4 in MiVA
RetryPolicy.shouldRetry() only handles IOException/SocketTimeoutException. Supabase SDK throws RestException — 502 gateway errors get NO_RETRY, causing instant user-visible failures on transient outages. Pattern shows how to classify SDK-specific exceptions properly.
Implementation
sealed class RetryDecision {
  object Retry : RetryDecision()
  object NoRetry : RetryDecision()
}

fun shouldRetry(e: Throwable) = when {
  e is CancellationException -> throw e // never retry
  e is IOException -> Retry
  e is SocketTimeoutException -> Retry
  e is RestException && e.statusCode in 500..599 -> Retry
  e is RestException && e.statusCode == 401 -> NoRetry
  else -> NoRetry
}
P4
Atomic Write Pattern
Prevents: V9 · partial writes · corrupted downloads
Data integrity
What it is

AI-generated file write code typically writes directly to the target path. If the process is interrupted mid-write (crash, low memory, network loss), the file is left in a partially-written, corrupted state. The atomic write pattern writes to a temporary file first, verifies integrity with fsync, then atomically renames to the target. The target either contains the complete new content or the previous complete content — never a partial state.

MiVA V9 fix
AtomicDownloader replaced direct file writes in the video download pipeline. Previously, a crashed download left corrupted .mp4 files that passed extension checks but failed playback — a silent failure that only surfaced at runtime.
Implementation
object AtomicDownloader {
  suspend fun write(target: File, data: ByteArray) {
    // 1. Write to temp file
    val tmp = File("${target.path}.tmp")
    tmp.outputStream().use { out ->
      out.write(data)
      out.fd.sync() // fsync — flush to disk
    }
    // 2. Verify integrity
    check(tmp.length() == data.size.toLong())
    // 3. Atomic rename — never partial
    tmp.renameTo(target)
  }
}
P5
Repository Pattern with Schema Isolation
Prevents: FM-01 · FM-06 · hallucinated column names
DB isolation
What it is

All database access is abstracted behind a Repository interface. The repository is the only layer that knows column names, table structures, and RPC function signatures. Feature code never references column names directly — it calls repository methods. When AI generates a query with a hallucinated column name, the error surfaces at the repository boundary rather than silently returning null data at runtime.

MiVA enforcement rule
Before writing any function touching DB, run control.show_columns('schema','table'). Never assume column names. This is enforced in the SSOT Developer Rules — a process rule made necessary by FM-06.
Implementation
// Repository interface — only schema knowledge lives here
interface ChapterRepository {
  suspend fun getChapters(packSlug: String): Result<List<Chapter>>
  suspend fun getClips(chapterId: String): Result<List<Clip>>
}

// Feature code — no column names, no SQL
class PlayerViewModel(private val repo: ChapterRepository) {
  fun loadChapter(id: String) {
    repo.getClips(id) // hallucinated columns fail here
  }
}
P6
Structured Concurrency + CancellationException Guard
Prevents: V7 · 44 guard instances added in MiVA v41
Coroutines
What it is

AI-generated coroutine code almost universally catches Exception broadly — which silently swallows CancellationException. In Kotlin coroutines, swallowing cancellation breaks structured concurrency: parent scopes don't know the child was cancelled, leading to resource leaks, zombie coroutines, and unpredictable state after navigation events. Every catch block must re-throw CancellationException immediately.

MiVA scale
44 CancellationException guards added across MiVA v41 beta6. The vulnerability affected 27 MivaApi functions + RetryPolicy + GlobalErrorHandler — every single coroutine-based API call in the app.
Implementation
// WRONG — AI default — breaks structured concurrency
try {
  val result = apiCall()
} catch (e: Exception) {
  Result.failure(e) // swallows CancellationException!
}

// CORRECT — V7 pattern
try {
  val result = apiCall()
} catch (e: CancellationException) {
  throw e // NEVER swallow — propagate immediately
} catch (e: Exception) {
  Result.failure(e) // safe to handle other exceptions
}
P7
Strategy Pattern for Capability Branching
Prevents: nested conditionals · device-specific drift
Branching logic
What it is

AI-generated device/capability branching accumulates as nested if/when blocks — hard to test, easy for AI to mis-state in future sessions (FM-11 attribution laundering). Strategy pattern extracts each capability variant into its own class implementing a sealed interface. Selection is done once at startup or at the boundary; all subsequent code is polymorphic and tests against the interface, not the concrete strategy.

MiVA instance
DeviceProfile uses 3-tier strategy (LOW/MEDIUM/HIGH RAM) rather than inline API level checks. G1: low RAM detection, G2: safeBlur() skips API < 31. Each tier is a separate implementation, selected once at startup.
Implementation
sealed interface PlaybackStrategy {
  fun canHardwareDecode(codec: String): Boolean
  fun maxConcurrentPlayers(): Int
  fun blurEnabled(): Boolean
}

class LowMemStrategy : PlaybackStrategy {
  override fun canHardwareDecode(codec: String) = false
  override fun maxConcurrentPlayers() = 1
  override fun blurEnabled() = false
}

// Selected once — used everywhere polymorphically
val strategy = DeviceProfile.selectStrategy()
P8
Optimistic Grant + Background Verification
Prevents: F4 webhook delay · payment UX cliff
Payments / UX
What it is

Webhook-based payment confirmation always has latency — Razorpay webhooks can arrive 10–30 seconds after user payment. AI-generated payment flows that wait synchronously for webhook confirmation create an unacceptable UX cliff where the user sees a spinner after paying. Optimistic grant: immediately grant access on payment success signal from SDK, then verify asynchronously in the background. If verification fails, revoke gracefully with a 1-hour grace period.

MiVA F4 implementation
PaymentManager.kt: optimistic grant on Razorpay SDK callback + background webhook verification. 1-hour grace period prevents false revocation during webhook delays. Subscription status is reconciled with DB on next app launch.
Implementation
suspend fun onPaymentSuccess(orderId: String) {
  // 1. Optimistic grant — immediate UX
  subscriptionCache.grant(userId, expiresIn = 1.hours)
  navigator.goToContent()

  // 2. Background verification — async
  scope.launch {
    val verified = webhookVerifier.await(orderId, timeout = 30.seconds)
    if (verified) {
      subscriptionCache.confirmFromServer(userId)
    } else {
      subscriptionCache.revokeWithGrace(userId, grace = 1.hours)
    }
  }
}
Quick Reference

Pattern → Failure Mode mapping.

Pattern | Failure Mode Prevented | MiVA Evidence | Enterprise Risk if Missing
P1 Sealed Interface | FM-10 · CD1 | 19 vs 23 function count drift | Breaking API changes ship silently
P2 Command Bus | V2 race condition | PrefetchEventBus eliminates mutable state race | Data corruption under load
P3 Smart Retry | V5 · O4 RestException | 502s getting NO_RETRY → instant user error | Transient outages appear as hard failures
P4 Atomic Write | V9 partial write | Corrupted video files on interrupted download | Silent data corruption in file-based systems
P5 Repository | FM-01 · FM-06 | show_columns() rule — hallucinated columns fail at repo boundary | Silent null returns from hallucinated queries
P6 CE Guard | V7 · 44 guards | Every MivaApi function had swallowed cancellation | Resource leaks · zombie coroutines in prod
P7 Strategy | FM-11 attribution | DeviceProfile 3-tier vs inline API level checks | Capability branching becomes untestable
P8 Optimistic Grant | F4 webhook delay | Razorpay webhook latency eliminated from UX path | Payment cliff → user abandonment
Prompt Library · Click to expand

Nine prompts.
Reusable infrastructure.

A prompt is not chat text — it is reusable infrastructure encoding hard-won patterns. Every prompt has a phase, a model class, and a purpose.

AI Failure Modes Catalog

Twelve failure modes.
All named. All countered.

FM-01
Hallucinated API
AI references get_session_by_tag() — no such function exists. Compiles, fails at runtime.
PL-05 Source-First · PL-06 Reality Check
FM-02
Wrong-file Attribution
CD3: fix in MivaReelPlayer.kt. Logic was in ChapterPlayerScreen.kt. Bug persisted.
PL-05 Source-First Enforcement
FM-03
Silent Truncation
Chapter 1 HTML cut off mid-JavaScript. No error thrown. Content simply missing.
Model Matrix §2.1 · length check
FM-04
Regeneration Loss
CD2: SSOT regenerated, two sections silently dropped. Caught by hash change.
PL-03 Surgical Edit · Update Policy
FM-05
Status Drift
CD4: V1–V11 marked "Pending" in SSOT, already fixed in code for weeks.
PL-08 Drift Sweep · Authority Chain
FM-06
Confident Hallucination
AI asserts a DB column exists with no verification — stated as fact, not hypothesis.
PL-05 · control.show_columns()
FM-07
Premature Solution
AI jumps to code before FRAME is locked — solution before problem is stated.
PL-01 FRAME Distillation
FM-08
Context Exhaustion
Long chat starts contradicting earlier outputs — model loses track of prior decisions.
One-chat-per-unit rule · §5 budget
FM-09
Sycophantic Agreement
AI agrees with a wrong premise the user stated confidently.
PL-07 (bias toward finding problems)
FM-10
Plausible-but-Wrong Structure
Code compiles and looks right but violates sealed contract — caught by SEAL hash.
Build-hash verification · SealedContractTest
FM-11
Attribution Laundering
AI cites "the spec" for something that is not in the spec.
PL-07 · require exact quote + line ref
FM-12
Model Capability Drift
A prompt that worked on Opus 4.6 produces worse output on Opus 4.7 — silent regression.
§2.3 throwaway experiment rule
Tool gap · Process gap · AI hallucination · Human oversight
Operational Cadence

Calendar events,
not intentions.

Daily · 15 min
Weekly · 90 min
Per Milestone
Per Model Release
1
Sealed contract check — Did any BUILD chat today touch a sealed contract? If yes, run SealedContractTest before closing.
2
Git commit — All changed files committed with meaningful messages? No "WIP" commits at end of day.
3
Review queue — Did any unit cross "Ready for external review"? Queue the VERIFY chat for tomorrow.
4
Decision Log — Any unknowns flagged today that need Decision Log entries? Log them now.
1
PL-08 Drift Sweep — Run in a dedicated chat with current SSOT + Decision Log. All five reconciliation pairs.
2
Decision Log review — Any entries stale, superseded, or in conflict? Mark superseded; never delete.
3
Drift Incident Log — Any new CD-series incidents this week? Open them formally.
4
Canonical backup — Verify zip + DB dump is current and correctly named.
5
SSOT version check — Does the version number reflect the week's changes? Bump if needed.
1
Full L1 + L2 + L3 VERIFY on all changed surfaces. No Lite mode at milestones.
2
Reconciliation sweep with zero tolerance — every open delta resolved or formally accepted.
3
Named canonical backup — e.g. MiVA_v42_Complete.zip, backup_43.
4
SSOT version bump + Decision Log summary — A milestone without this did not happen.
5
External review via PL-07 — by a second AI family. Builder-family reviews are insufficient for milestones.
1
Read release notes — capture context window, output limit, new tools, deprecated behaviours.
2
One throwaway experiment — from current backlog. See where the new model behaves differently.
3
Update Model Selection Matrix — only the affected slot. Never rebase the whole matrix.
4
Log the transition — add row to §2.2 Model Transition Log with date, from, to, and reason.
5
New failure modes? — add to §4.1 Failure Modes before moving on.
6
SDLC does NOT change — if it feels like it needs updating during a model transition, something deeper is wrong.
Document 03 · Evidence

From Solo to Production

The actual 6-month architecture-to-beta build that produced the methodology. Four drift incidents. Five model transitions. What broke, what worked, and what we'd do differently from day one.

Build window
Oct 2025 – Apr 2026
Team
1 developer + AI assistants
Drift incidents
CD1–CD4, all resolved
The Platform

Three surfaces.
One PostgreSQL backend.

MiVA is an education platform for Indian Class 10 students. The core thesis: 3–4 hours of daily Instagram Reel consumption redirected toward syllabus content in the same format.

📱
Android Application
CBSE Class 10 Science delivered as vertical reels — same swipe, same loop, same voluntary consumption as social media.
⚙️
Admin Console
Browser-based content production with automated subtitle generation via speech-to-text alignment. Built in a single evening.
🌐
MiVA Web (SCERT)
Free, static bilingual (English/Malayalam) interactive learning platform for Kerala SCERT ICT Standard X students.
Codebase
~80+ Kotlin files
Database
40+ tables · 7 schemas
Security
85+ RLS policies
Sealed interfaces
~20 frozen contracts
AI model families
Claude · ChatGPT · Gemini
Build Timeline

The messy reality.
Not the clean SDLC.

The build didn't follow the clean SDLC the framework now prescribes — that mess is precisely what the SDLC is designed to prevent next time.

Oct – Nov 2025
Early video content prompts, experiment visuals, initial content transformation pipeline established.
FRAME phase — "video IS the product" as the core bet
December 2025
Architecture foundations — sealed module concept, device management, B2C pricing, anti-drift-by-design philosophy formalised.
SEAL phase — the first sealed contracts declared
Dec 2025 – Jan 2026
Android build grows. Database migration. RLS across 40+ tables with 85+ policies deployed.
SPEC phase — SSOT born; extension pattern established
February 2026
Content production pipeline for Chapter 1. First NCERT accuracy audit reveals hallucination in chemistry equations.
VERIFY — scientific accuracy requires highest-reasoning model
Feb – Mar 2026
Ship Battle Plan. Feature freeze. Admin console built in a single evening using one-chat-per-unit pattern.
One-chat-per-unit rule discovered organically
March 2026
v40 source validation — first major drift discovered. CD1 formally logged. Authority Chain introduced.
RECONCILE phase — Drift Incident process formalised
March 2026
v41/beta6 — vulnerability fix confirmation drift (CD4). CancellationException guards. Sealed-baseline hash regenerated.
Reconciliation Sweep as first-class process
April 2026
MiVA Web pivot — SCERT ICT Standard X. One-chat-per-chapter with Sonnet Extended. SDLC v1.0 written.
The framework closes on itself
The Evidence Base

Four incidents.
Every rule traces to one.

Click each incident to see what happened, the root cause, and what rule it produced in the SDLC.

Model Transitions

Five transitions.
Artefacts unchanged.

Transition | What broke | What held
Opus 4.5 → 4.6 | Prompts tuned for 4.5 were slightly under-specified; more concise outputs | All artefacts — SSOT, Decision Log, sealed contracts — unchanged
Opus 4.6 → 4.7 | Context packaging needed tightening; 4.7 caught more subtle drift | Phase structure; prompts needed minor tuning only
Sonnet 4.5 → 4.6 Extended | Attachment budgets and output window changed; Model Selection Matrix updated | The one-chat-per-unit pattern scaled unchanged
Claude ↔ ChatGPT | More explicit role instructions needed; formatting conventions differ | Review-prompt structure (PL-07) generalised with minor adaptation
Claude ↔ Gemini | Tool use patterns differ substantially | The principle of cross-family review held
Closing Observations

Three things the build
kept proving.

9.1
Verification Is Non-Optional
Every drift incident — CD1 through CD4 — traces to a missing verification step. Not weak; missing. The SDLC makes verification non-optional at phase gates, with "Ready for external review" as the load-bearing trigger.
9.2
Plausibility Is Dangerous
AI produces plausible work at a pace humans can't match. "It looks right" is not evidence of correctness. Viewing the actual source is. Every prompt in the MBG is designed to force this distinction.
9.3
Durable Artefacts Outlast Models
When Opus 4.5 became 4.7, the SSOT did not change. When Claude was replaced by ChatGPT for a review, the sealed contracts did not change. This is the result of deliberately designing artefacts to be model-agnostic.

"The SDLC framework is not the MiVA build. It is what the MiVA build taught us the process should have been from day one. The next project gets to start there."

Document 04 · Service Offering

The Drift Audit

A standalone 2-week diagnostic engagement. Fixed-scope. Fixed-fee. Five reconciliation pairs. Usable without adopting any larger framework.

Duration
10 working days
Auditor effort
~50 hours
Client effort
~8 hours
Pairs
5 (A through E)
10-Day Execution Schedule

10 days.
No wasted time.

Two weeks is the precise point where all five pairs can be examined with enough depth to produce actionable findings without over-scoping.

Week 1 — Discovery & Investigation
Day 1
Kickoff & Inventory
Access provisioning, artefact inventory, scope confirmed in writing
Day 2
Hypothesis Formation
Spec read-through, "suspicious areas" shortlist
Day 3
Pair A Sweep
Code ↔ SSOT — counts, signatures, locations
Day 4
Pair A Deep + C Begin
Pair A frozen; Pair C (Schema ↔ Code) draft
Day 5
Status Call
Pair C complete; interim memo to client
Week 2 — Analysis & Reporting
Day 6
Pair B Sweep
Decision Log ↔ SSOT — traceable decisions
Day 7
Pairs D + E
Immutable Contracts + Docs ↔ Reality
Day 8
Root Cause Analysis
RCA across all findings, severity classification
Day 9
Report Drafting
Remediation plan, Prevention Shortlist, exec summary
Day 10
Delivery
Final report + readout call with leadership
The Methodology

Five pairs.
Drift is always between two things.

Click a pair to see exactly what to check during the audit.

Severity Classification

Every finding gets a
non-negotiable severity.

Consistent classification makes the Drift Audit a credible diagnostic. When in doubt, classify up.

Critical
Could cause a production incident, security breach, compliance failure, or silent data loss if shipped.
Code references an unfixed vulnerability the SSOT claims is resolved; auth flow calls a function that doesn't exist.
High
Could cause a visible bug, failed deployment, or incorrect business decision in the next 30 days.
SSOT claims 19 endpoints; code has 23. Schema has a column used by 3 queries that was recently renamed.
Medium
Creates confusion, slows onboarding, or increases risk of future drift without immediate failure.
Decision Log entries superseded but not marked. README setup instructions reference a deprecated package.
Low
Cosmetic, stylistic, or informational inconsistencies with no operational impact.
Function names in docs use camelCase; code uses snake_case. Both work — docs just look inconsistent.
Cumulative rule: Five Medium findings in the same subsystem = one High finding about that subsystem. Novelty is not severity — a boring count mismatch that could ship wrong data outranks an exotic drift pattern that cannot ship.
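The cumulative rule is mechanical enough to automate. A sketch with assumed types, not part of the audit toolkit:

enum class Severity { LOW, MEDIUM, HIGH, CRITICAL }
data class Finding(val subsystem: String, val severity: Severity, val summary: String)

// Five Medium findings in one subsystem roll up into one High finding about that subsystem
fun applyCumulativeRule(findings: List<Finding>): List<Finding> {
    val rollUps = findings
        .filter { it.severity == Severity.MEDIUM }
        .groupBy { it.subsystem }
        .filter { (_, hits) -> hits.size >= 5 }
        .map { (subsystem, hits) ->
            Finding(subsystem, Severity.HIGH, "${hits.size} Medium findings concentrated in $subsystem")
        }
    return findings + rollUps
}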
Commercial Model

Fixed-scope.
Fixed-fee. Always.

Hourly pricing destroys the value proposition. The Drift Audit's selling point is that it is bounded, predictable, and decision-ready in 2 weeks.

Tier 1
Starter
5 working days
One reconciliation pair, one subsystem, up to 5 findings.
Small teams, single-product SaaS, proof-of-value
MOST COMMON
Tier 2
Standard
10 working days
All five pairs, one product/subsystem, unlimited findings.
The default engagement
Tier 3
Extended
15 working days
All five pairs, multiple products/subsystems, interview-heavy.
Larger organisations
Follow-on
Retainer
1 day / month
Monthly Reconciliation Sweep using PL-08 methodology.
Post-audit, high-velocity teams

Starter ≈ 50% of Standard · Extended ≈ 150% of Standard · Retainer ≈ 15% of Standard / month

Qualifying Conversation

When to say no.

These disqualifiers exist to be used. Decline engagements that match them — they produce bad outcomes and damage the methodology's reputation.

Client expects working code changes, not a plan — that is a Remediation Sprint, priced differently
Client expects the audit to exonerate or blame specific individuals
No specification exists and none is planned — nothing to reconcile code against
Client will not grant read access to code, SSOT, and decisions together
Timeline demand is under 5 working days — cannot compress further without loss of rigour
Conversion rule of thumb: An honest audit converts roughly 30–50% of clients into at least one follow-on. A client who takes no follow-on is not a failure. Pushing too hard on follow-on sales cheapens the diagnostic.
Book a qualifying call
30 minutes. No commitment. Find out whether your codebase has a drift problem — and whether a Drift Audit is the right response.
For CTOs · Engineering Directors · Tech Leads

Adopt in 2 Weeks.

A lightweight governance overlay for teams using AI in development. No tool replacement. No org restructure. No slowdown. Start with three rules and expand from there.

Pilot duration
2 weeks
Team disruption
Minimal
Tools required
None new
Entry point
3 rules, then expand
Who this is for

You already have
a drift problem.

If your team uses AI coding assistants and has said any of the following, you are already experiencing drift.

SYMPTOM
"It worked in the last version — I don't know why it broke."
SYMPTOM
"The fix looked right — the bug is still there."
SYMPTOM
"Nobody is sure which document is the source of truth."
SYMPTOM
"The AI said it was fixed. It wasn't."
Why your existing tools miss this: CI/CD checks code against tests. Code reviews check deltas, not absolute state. PR gates check changes, not consistency. None were designed for a world where artefacts are authored by different AI sessions at different times — each plausible on its own, none reconciled with each other.
Evidence from a real build

What drift actually
costs.

4
Critical drift incidents
in a 6-month solo build
0
Caught by CI/CD
all missed by standard gates
5+
Model generations
survived without rework
3
Rules prevent 80%
of observed drift
Start here · Zero resistance

Minimum Viable Mode.
Three rules only.

If your team resists process overhead, start with just these three. They alone catch 80% of observed drift with almost zero friction.

RULE 1 Source-First

Before the AI claims anything about existing code — it must view the actual file. "The logic lives in X" is a hypothesis. Reading the file is a fact.

Before answering, view the file. Quote the exact function. Then propose the change.
RULE 2 Drift Log

Open a simple CD-series log. Every time code and spec disagree — log it with: what drifted, root cause, and how it was fixed. One Markdown file is enough.

CD1 | Date | What drifted | Root cause | Resolution
RULE 3 Weekly Sweep

Every Friday: compare code vs spec on one axis. Takes 15 minutes. Any mismatch found = log it. That's it. No ceremony, no meetings.

Does the spec still match the code? Yes / No / Log it.
2-Week Pilot Plan

Try it before
committing your org.

A structured pilot that proves the methodology works on your codebase — before you ask your team to adopt it fully.

Week 1 — Visibility
1
Open your Drift Log
Create DRIFT_LOG.md. Add your first CD entry — a real inconsistency you already know about.
2
Enforce Source-First for 5 days
Every AI interaction: must read the actual file before proposing. Count how many times it would have guessed wrong.
3
Declare your Authority Chain
Write it down: which artefact wins when two disagree? Pin it in Slack or Notion.
Week 2 — Control
4
Run your first Reconciliation Sweep
Compare code vs spec on Pair A. Count mismatches. Log every one found.
5
Introduce the Review Gate
Every feature: "Ready for external review" must list changed files + verification prompt before merge.
6
Measure and decide
Count drift incidents found. Count prevented. Compare to last sprint. That's your ROI.
Success Metrics

What good
looks like.

Within 2 weeks you should see measurable signals — even on a small team.

Metric | What you are measuring | Good signal
Drift incidents detected | Mismatches found between code, spec, and decisions | 5–20 in week 1 (this is normal — they were always there)
Source-First catches | Times AI would have guessed wrong without reading the file | ≥ 1 per day on active AI usage
Bug misdiagnosis rate | Fixes applied to wrong file or wrong assumption | Drops noticeably in week 2
Authority disputes | Times team argued about which version is correct | Resolved by chain, not debate
"Phantom bug" incidents | Bugs that looked fixed but weren't | Decreasing trend week over week
Interactive · Drift Index Calculator

What is your team's Drift Index?

The Drift Index (DI) is a simple, computable measure of artefact consistency in your AI-augmented codebase. Rate each pair honestly — the score tells you where to start.

Your Drift Index
Complete the pairs to see your score

Formula: DI = (total mismatches found) ÷ (pairs checked) · Scale: 0.0 = no drift · 1.0 = complete divergence · Data stays in your browser.

Honest limits

Where this doesn't
work well.

Every framework has boundaries. Knowing them prevents bad adoption decisions.

POOR FIT
Pure greenfield with no spec
Nothing to reconcile against. Build your SSOT first, then adopt the framework.
POOR FIT
Teams not using AI assistants
The framework works, but drift accumulates slower. Lower urgency, lower ROI.
POOR FIT
No ownership culture
If no one owns the SSOT or Decision Log, the framework degrades. Assign owners first.
POOR FIT
Pure infrastructure / DevOps teams
Different artefact types. Adapt the reconciliation pairs — don't use them verbatim.
Drift Index · Interactive Calculator

Measure your
system integrity score.

The Drift Index (DI) is a computable metric for artefact alignment across your codebase. Run a quick reconciliation sweep and enter the numbers below. DI = 0 means full alignment. DI = 1 means everything is out of sync.

Enter your reconciliation counts
DI = total mismatches ÷ total artefact pairs checked
Run each pair for 15 min max. Log every mismatch found.
DI 0.0 – 0.1
Healthy
Minimal drift. Maintain weekly sweeps. Your artefacts are well-aligned.
DI 0.1 – 0.3
Manageable
Normal for active AI-augmented teams. Address high-severity mismatches first.
DI 0.3 – 0.6
At risk
Significant drift accumulated. Consider a formal Drift Audit before next release.
DI 0.6 – 1.0
Critical
High drift across artefacts. Do not ship. Full Drift Audit recommended before proceeding.
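For teams that want the same arithmetic outside the browser, the formula and bands above reduce to a few lines. A sketch; the names are assumed:

data class PairCheck(val pair: String, val mismatches: Int)

// DI = total mismatches found ÷ pairs checked
fun driftIndex(checks: List<PairCheck>): Double =
    checks.sumOf { it.mismatches }.toDouble() / checks.size

fun band(di: Double): String = when {
    di <= 0.1 -> "Healthy"
    di <= 0.3 -> "Manageable"
    di <= 0.6 -> "At risk"
    else      -> "Critical"
}

Checking all five pairs and finding two mismatches, for example, gives DI = 2 ÷ 5 = 0.4, which lands in the At-risk band.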

Ready to run a
pilot?

30-minute qualifying call. We establish whether your team has a drift problem, scope a 2-week pilot, and define success criteria together. No commitment.

Start a pilot conversation
⚙ Experimental Tool · AI-Native SDLC Designer

Design Your
Own SDLC.

Customise the AI-Native SDLC for your team. Rename phases, add sub-steps, set owners, adjust gates. Export as Markdown or JSON when you're done.

Based on
AI-Native SDLC v1.1
Export
Markdown · JSON
Status
Experimental · Local only

Your SDLC Canvas

Click any phase to expand and edit. Drag to reorder. Add custom phases with the button below.

Authority Chain

Define your source of truth hierarchy

Drag to reorder. The top item wins all conflicts.

Reconciliation Pairs

What to compare during sweeps

Define what artefact pairs your team will reconcile. Toggle on/off. Add your own.

Preview

Your SDLC Summary

Competitive Positioning · April 2026

How This Compares
to Industry Frameworks.

Enterprise delivery frameworks, SAFe, GitHub Spec Kit — all strong frameworks solving real problems. None of them address what happens to artefact integrity when multiple AI sessions author your codebase. That is the gap.

Frameworks reviewed
7 major
Enterprise delivery
SAFe · ITIL · Large-scale Agile · IDM-style
Global tools
GitHub Spec Kit · AWS Kiro · SAFe
Position
Complementary governance overlay
Before the comparison

This is not a
replacement.

Every framework below is solving a real, important problem. The AI-Native SDLC addresses a specific gap that emerged only after AI coding assistants became mainstream — cross-session artefact drift. Most of these frameworks were designed before that problem existed at scale.

What they solve

Delivery velocity, team coordination, cloud migration, AI tool adoption, automated testing, sprint governance, distributed agile, enterprise transformation.

The gap they share

None address what happens when different AI sessions author different artefacts that slowly diverge from each other — invisibly, plausibly, and at speed.

The relationship

AI-Native SDLC sits as a governance overlay. Teams using enterprise AI delivery platforms or SAFe can adopt it without replacing their existing delivery framework.

Enterprise Delivery

SAFe · ITIL · Large-scale Agile · IDM-style
What they do and what they miss.

India's largest IT firm
Large-scale Agile Delivery
large enterprise IT Services
$29B revenue · 600,000+ employees
Location Independent Agile™ · Machine First Delivery™ · ignio™ AIOps · Jile™ Agile Platform · Business 4.0™
What enterprise delivery does well
  • Distributed agile at massive scale — 6,000+ active agile engagements
  • AI for IT operations automation via ignio™ (AIOps, anomaly detection)
  • Machine First Delivery — replacing human steps with AI where safe
  • Enterprise agile transformation advisory and tooling (Jile™)
What it doesn't address
  • Cross-session artefact drift when multiple AI tools author the same codebase
  • Spec-code divergence detection — no mechanism to verify code matches spec
  • Sealed contract enforcement — no architectural anti-mutation mechanism
  • AI hallucination governance at the build-artefact level
How they fit together

The enterprise delivery framework governs how teams deliver. AI-Native SDLC governs what gets delivered and whether it's consistent. An engagement running Location Independent Agile (LIA) can adopt AI-Native SDLC as the artefact integrity layer — Reconciliation Sweeps map naturally to LIA's Sprint Review cadence.

Global AI leader
IDM-style Enterprise SDLC
$18.6B revenue · 300,000+ employees
270,000 AI-certified staff
Topaz™ · Cobalt™ · SDLC Agent Framework · Nia™ AI Platform
What enterprise delivery does well
  • Two-pronged AI+Cloud architecture (Topaz intelligence + Cobalt foundation)
  • AI upskilling at scale — 270,000 staff AI-certified across three tiers
  • SDLC Agent Framework — multi-agent orchestration for development tasks
  • 300+ cloud-first solution blueprints; compliance baked into delivery
What it doesn't address
  • Artefact integrity across multi-agent sessions — Topaz accelerates but doesn't reconcile
  • Decision traceability — no explicit mechanism linking spec choices to code
  • Model-transition governance — no artefact stability framework when agents change
  • Source-First enforcement — Cobalt doesn't address AI hallucinating code context
How they fit together

Enterprise AI Platform provides the cloud-ready platform; Topaz embeds AI across delivery. AI-Native SDLC adds the governance layer that ensures Topaz outputs (spec, code, docs) stay consistent with each other — which the Topaz+Cobalt stack currently does not provide.

Consulting-led AI
Major IT Services
$11B revenue · 250,000+ employees
AI360™ · Holmes™ AI Platform · Agentic SDLC Framework
What Major IT Services does well
  • Agentic AI transformation — AI agents replacing human workflow steps
  • Honestly names the risks: context leakage, hallucination, governance complexity
  • AI360 covers the full enterprise transformation lifecycle
What Major IT Services doesn't address
  • Names context leakage as a risk — but provides no structured mitigation framework
  • No artefact reconciliation mechanism across multi-agent sessions
  • No sealed contract pattern for preserving architectural integrity
Noteworthy

Major IT Services's AI360 is the closest of the Indian IT frameworks to acknowledging the governance problem. They name "context leakage" explicitly. AI-Native SDLC is essentially what they need to solve the problem they've correctly identified but not yet addressed.

Engineering-led
Large-scale IT Delivery Tech
$13B revenue · 220,000+ employees
DRYiCE™ AIOps · LEAP Agile Framework · AI Force™
What Large-scale IT Delivery does well
  • Engineering and R&D services with deep technical delivery
  • DRYiCE AIOps — IT operations automation and self-healing systems
  • AI Force — AI-powered developer productivity toolchain
The same gap

Large-scale IT Delivery's engineering focus is on delivery automation — making the build faster with AI tools. As with the other enterprise frameworks above, there is no published framework for governing the integrity of AI-generated artefacts across sessions. The drift problem is not on their roadmap yet.

Global Tools & Frameworks

SAFe · GitHub Spec Kit · AWS Kiro
Closest competitors — and the gap.

SAFe®
Scaled Agile Framework
Most adopted enterprise agile
Strengths

Battle-tested at enterprise scale. PI Planning, Inspect & Adapt, Value Streams. The most structured large-team agile framework available.

AI-era gap

SAFe was designed for human-authored code. No guidance on governing AI-generated artefacts, cross-session consistency, or model transitions.

Relationship

Reconciliation Sweeps map to SAFe's Inspect & Adapt. PI cadence maps to milestone-level RECONCILE. Can be layered directly on top of SAFe.

GitHub Spec Kit
Spec-Driven Development
Released 2025 · Open source
Strengths

Spec-at-centre approach. Specs drive AI agent task breakdowns, checklists, implementation. Closest to the AI-Native SDLC philosophy in the market. Strong GitHub ecosystem integration.

Key difference

Spec Kit governs forward — spec drives implementation. AI-Native SDLC also governs backward — what happens when implementation drifts from spec after the fact. Spec Kit has no reconciliation mechanism.

Complementary

Use Spec Kit to generate the initial spec. Use AI-Native SDLC to keep it aligned with code over time. These two are natural partners — the strongest combination currently possible.

AWS Kiro
Spec-Driven AI IDE
Announced re:Invent 2025
Strengths

IDE-level spec-driven development. Auto-generates requirements, plans, tasks from a spec file. Hooks into Amazon Q for agent execution. Major IT Services built 4 distributed modules in 20 hours using Kiro.

The gap

Kiro governs the generation of code from spec. It does not govern what happens to the spec after multiple Kiro sessions — the spec itself can drift. No authority chain, no sealed contracts, no reconciliation.

Relationship

Kiro generates. AI-Native SDLC reconciles. Kiro is the BUILD phase tool; AI-Native SDLC is the SEAL + VERIFY + RECONCILE governance layer that keeps Kiro's outputs trustworthy over time.

Head-to-head

The gap, framework
by framework.

Every framework below addresses delivery. Only one addresses what happens to artefact integrity when AI is doing the authoring. That is the column that matters for teams experiencing drift.

Framework | Delivery Speed | Team Scale | AI Tool Adoption | Spec-Code Integrity | Cross-Session Drift | Model Transition | Sealed Contracts

✅ Strong · ⚠️ Partial · ❌ Not addressed · These are honest assessments, not marketing claims.

The honest positioning

Not "better than Enterprise Agile or IDM-style Framework."
The missing layer that completes them.

Location Independent Agile™ is proven at 6,000+ engagements. The Topaz + Cobalt stack has transformed enterprise AI adoption at scale. These are excellent frameworks for what they do. What they don't do — and cannot do without a significant architectural redesign — is govern cross-artefact integrity in multi-session, multi-model AI-augmented development. That is not a criticism. It's a gap in the market that didn't exist until AI coding assistants became mainstream in 2024–2025. The AI-Native SDLC is the governance layer that fills it.