How it works: human decisions, AI speed, deterministic delivery
Most data migration programmes don't fail because teams can't write SQL. They fail because decisions are unclear, evidence is inconsistent, and handoffs turn into rework.
Spreadsheets drift. Scripts get patched under pressure. Reconciliations are late and subjective. Cutover becomes a “big bang” of hope.
elfware uses a different operating model: human-led decisioning, AI-assisted optioning, and deterministic generation + validation for production delivery.
1) Decisions first: what “good” looks like
We begin by making the decision surface explicit:
- What data domains are in scope (and what isn’t)
- What the target needs to operate (MVP vs later phases)
- What constitutes acceptance (counts, balances, exceptions)
- What rules apply (effective dating, reference integrity, lifecycle, status logic)
- What evidence will be used for sign-off
This is consulting work, led by people who have delivered migrations and understand the trade-offs. The goal is to remove ambiguity early.
2) AI-assisted optioning (non-deterministic, bounded)
AI is valuable where you want speed and breadth:
- Suggesting mapping candidates and transformation patterns
- Proposing validation checks and edge-case probes
- Highlighting inconsistencies across specs and sources
- Generating “first pass” templates for review
This stage is intentionally non-deterministic: it explores options. But it's bounded by governance: outputs are reviewed, selected, and turned into structured decisions. AI is not the decision authority.
AI boundary: AI never processes customer data; it supports mapping and delivery configuration only. When AI assists with code generation, the output is reviewed, QA'd, and verified in test runs before deployment to any system.
AI governance: what AI does not do
We use AI in one place only: to accelerate mapping recommendations based on source schemas, target schemas, and customer business rules. Consultants either accept or override every recommendation. That is the full extent of AI involvement.
AI does not:
- Touch, access, or process customer data — it operates on metadata and schema definitions only
- Generate production artefacts without review — custom logic code is AI-assisted, but reviewed, QA'd, and verified in test runs before any deployment to development, system test, or cutover systems
- Make decisions — it suggests options; consultants decide
- Run during cutover or rehearsal — the production pipeline is fully deterministic with no AI dependency
- Train on customer data — no customer data is used as training input for any model
- Operate unsupervised — every AI suggestion passes through a mandatory review/approve gate before it can influence a governed spec
The review/approve gate
Between AI-assisted optioning and governed specs sits an explicit gate. Consultants review AI suggestions against source data profiles, target requirements, and customer business rules. Only accepted recommendations flow into the governed spec. Overridden or rejected suggestions are logged but have no effect on delivery.
All mapping decisions taken by consultants are then confirmed with customers and tested through development, system testing, and dress rehearsal / dry run activities before production cutover.
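As a sketch only (the class and field names are hypothetical, not elfware's actual tooling), the gate behaves like a filter: only consultant-accepted suggestions reach the governed spec, and rejections are logged with no downstream effect:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A hypothetical AI mapping suggestion awaiting consultant review."""
    source_field: str
    target_field: str
    accepted: bool = False   # set True only by an explicit consultant decision
    reviewer: str = ""

def review_gate(suggestions):
    """Split suggestions: accepted ones flow to the governed spec; the rest
    are logged for audit but have no effect on delivery."""
    governed = [s for s in suggestions if s.accepted]
    rejected_log = [s for s in suggestions if not s.accepted]
    return governed, rejected_log
```

The point of the sketch is the default: a suggestion that has not been explicitly accepted by a reviewer cannot reach the governed spec.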
Decision traceability
Every decision is traceable end-to-end: what AI suggested, what the consultant chose (and why), what was generated from that choice, and what evidence the validators produced. This chain is stored, versioned, and available for audit. If AI is removed entirely, the same governed specs still drive the same deterministic output. AI is not on the critical path.
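One way to picture that chain (the field names are illustrative assumptions, not the product's schema) is a versionable record linking each stage, with the generated artefact pinned by hash so the audit trail cannot drift from what was actually run:

```python
import hashlib
import json

def decision_record(suggested, chosen, reason, artefact, evidence):
    """One link in a hypothetical audit chain: AI suggestion -> consultant
    choice (and why) -> generated artefact (pinned by content hash) ->
    validator evidence. Serialised with sorted keys for stable diffs."""
    return json.dumps({
        "suggested": suggested,
        "chosen": chosen,
        "reason": reason,
        "artefact_sha256": hashlib.sha256(artefact.encode()).hexdigest(),
        "evidence": evidence,
    }, sort_keys=True)
```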
3) Governed specs and templates: the delivery contract
Once decisions are made, they are expressed as governed inputs:
- Mapping definitions (source → target, transformations, notes)
- Orchestration intent (sequence, dependencies, run groups)
- Policies (naming rules, sequencing windows, “must have” fields)
- Evidence requirements (what gets validated, when, and how)
This is the critical boundary: from here onwards, the objective is repeatability, auditability, and controlled change.
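A governed mapping input might look like the following minimal sketch (the keys, transform names, and policy shape are invented for illustration, not elfware's spec format):

```python
# Hypothetical governed spec: one structured, versioned input that drives
# both generation and validation downstream.
GOVERNED_SPEC = {
    "version": "2025-01-14.3",
    "mappings": [
        {"source": "CUST.CUST_NO", "target": "customer.customer_id",
         "transform": "trim", "note": "legacy keys are space-padded"},
        {"source": "CUST.STATUS", "target": "customer.status",
         "transform": "upper", "note": "lifecycle decision recorded in review"},
    ],
    "policies": {"mandatory": ["customer.customer_id"]},
    "evidence": {"checks": ["row_count", "ref_integrity"]},
}
```

Because the spec is plain structured data, it can be versioned, diffed, and reviewed under normal change control.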
4) Deterministic generation: code-printing the delivery
Instead of manually building artefacts, we generate them deterministically from the governed model:
- Extraction/load scripts
- Transformation steps
- Orchestration/run sequencing
- Reconciliation queries and evidence packs
- Run-time wrappers (logging, restart/recovery, traceability)
The same inputs produce the same outputs. When something changes, you update the model and reprint the artefacts, rather than patching scripts ad hoc.
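To make the "reprint" idea concrete, here is a toy generator (the spec shape and transform names are assumptions, purely illustrative): the same spec always prints byte-identical SQL, so a change means editing the spec and regenerating, never patching the output by hand.

```python
def render_expr(mapping):
    """Render one source expression; only two toy transforms are supported."""
    col = mapping["source"].split(".")[-1]
    return {"trim": f"TRIM({col})", "upper": f"UPPER({col})"}.get(
        mapping["transform"], col)

def print_load_sql(spec):
    """Deterministically 'print' a load statement from a governed mapping spec."""
    table = spec["mappings"][0]["target"].split(".")[0]
    cols = ", ".join(m["target"].split(".")[-1] for m in spec["mappings"])
    exprs = ",\n       ".join(render_expr(m) for m in spec["mappings"])
    return f"INSERT INTO {table} ({cols})\nSELECT {exprs}\nFROM staging_cust;"

SPEC = {"mappings": [
    {"source": "CUST.CUST_NO", "target": "customer.customer_id",
     "transform": "trim"},
    {"source": "CUST.STATUS", "target": "customer.status",
     "transform": "upper"},
]}
```

Running the generator twice over the same spec yields identical text, which is what makes diffs between rehearsals meaningful.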
Controlled non-determinism pipeline
(Diagram: the left side is where humans and AI collaborate on decisions; the right side is fully machine-executed and repeatable.)
5) Deterministic validation mesh: objective pass/fail evidence
Delivery speed is meaningless without assurance. We use deterministic validators to make quality gates executable:
- Schema/format checks (types, mandatory fields, allowable values)
- Referential integrity and hierarchy checks
- Balance checks (on-hand, commitments, movements)
- Delta controls (what changed since last rehearsal)
- Business rule assertions (lifecycle, effective dates, statuses)
- Traceability (lineage between input decisions and output artefacts)
Validators generate repeatable evidence: pass/fail results plus the supporting artefacts behind each sign-off decision. This replaces subjective “it looks right” reviews with objective checks.
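Two of the simpler checks can be sketched like this (assumed shapes; real validators would run against the databases and emit richer evidence packs):

```python
def check_row_count(source_count, target_count):
    """Pass/fail evidence for a count reconciliation."""
    return {"check": "row_count",
            "pass": source_count == target_count,
            "evidence": {"source": source_count, "target": target_count}}

def check_ref_integrity(child_keys, parent_keys):
    """Every child key must resolve to a parent; orphans are listed as evidence."""
    orphans = sorted(set(child_keys) - set(parent_keys))
    return {"check": "ref_integrity",
            "pass": not orphans,
            "evidence": {"orphans": orphans}}
```

Each check returns both the verdict and the evidence behind it, so a failed gate is immediately diagnosable rather than just red.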
6) Repeatable rehearsals: programme certainty grows each run
Because everything is generated and validated consistently, rehearsals become a compounding process:
- Each run produces comparable evidence
- Defects are detected earlier, not during cutover
- Fixes are made in the model and reprinted
- Governance becomes easier because evidence is consistent
This is how programmes move from “big bang anxiety” to confident cutover.
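The compounding effect depends on run-to-run comparison. A minimal sketch of a delta control between two rehearsals (the check names and boolean evidence shape are assumptions for illustration):

```python
def rehearsal_delta(prev, curr):
    """Compare pass/fail maps from two rehearsal runs.
    prev/curr: {check_name: passed_bool}."""
    return {
        "regressions": sorted(k for k, ok in prev.items()
                              if ok and not curr.get(k, False)),
        "fixed": sorted(k for k, ok in prev.items()
                        if not ok and curr.get(k, False)),
        "new_checks": sorted(set(curr) - set(prev)),
    }
```

Because evidence is produced the same way every run, regressions surface as soon as they appear rather than at cutover.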
Risk controls
Three layers of deterministic validators ensure every record, field, and relationship is checked before cutover.
Adaptors & accelerators
Reusable adaptors accumulate with each engagement, making every project faster than the last.
What you get (artefacts, not just advice)
A typical engagement produces tangible, re-usable delivery assets:
- Mapping workbooks/specs with structured notes
- Orchestration model and run schedules
- Generated scripts and wrappers for repeatable execution
- Validator packs and reconciliation evidence
- Cutover rehearsal playbooks
- An adaptor library footprint you can reuse
Where it fits
This model works especially well when:
- Multiple systems are changing at once (ERP + planning platforms)
- Auditability matters (financial or regulated environments)
- Time is constrained and rehearsals must be efficient
- Teams rotate and knowledge must be captured in assets, not in people
Next step: If you want to understand your programme's biggest risk drivers, run a short assessment and get a recommended path.
Frequently asked questions
How long does a typical migration take?
Do you replace our existing team?
What happens if decisions change mid-project?
How do you handle custom modifications?
What if issues are found during rehearsal?
Does AI generate production migration code?
Does AI access or process our data?
Who is accountable if AI suggests the wrong mapping?
Can we audit the AI’s involvement?
Will AI reduce engagement quality or replace our SMEs?
Does AI comply with our governance and change control?
What happens if the AI model changes or is unavailable?
Could AI introduce bias into mapping decisions?
Will this lock us into elfware’s tooling?
What’s the real value of AI vs normal automation?
Ready to de-risk your migration?
Same-day response (Mon-Fri)
