How it works: human decisions, AI speed, deterministic delivery

Most data migration programmes don't fail because teams can't write SQL. They fail because decisions are unclear, evidence is inconsistent, and handoffs turn into rework.

Spreadsheets drift. Scripts get patched under pressure. Reconciliations are late and subjective. Cutover becomes a “big bang” of hope.

elfware uses a different operating model: human-led decisioning, AI-assisted optioning, and deterministic generation + validation for production delivery.

1) Decisions first: what “good” looks like

We begin by making the decision surface explicit:

  • What data domains are in scope (and what isn’t)
  • What the target needs to operate (MVP vs later phases)
  • What constitutes acceptance (counts, balances, exceptions)
  • What rules apply (effective dating, reference integrity, lifecycle, status logic)
  • What evidence will be used for sign-off

This is consulting work, led by people who have delivered migrations and understand the trade-offs. The goal is to remove ambiguity early.

2) AI-assisted optioning (non-deterministic, bounded)

AI is valuable where you want speed and breadth:

  • Suggesting mapping candidates and transformation patterns
  • Proposing validation checks and edge-case probes
  • Highlighting inconsistencies across specs and sources
  • Generating “first pass” templates for review

This stage is intentionally non-deterministic: it explores options. But it's bounded by governance: outputs are reviewed, selected, and turned into structured decisions. AI is not the decision authority.

AI boundary: AI never processes customer data; it supports mapping and delivery configuration only. When AI assists with code generation, the output is reviewed, QA'd, and verified in test runs before deployment to any system.

AI governance: what AI does not do

We use AI in one place only: to accelerate mapping recommendations based on source schemas, target schemas, and customer business rules. Consultants either accept or override every recommendation. That is the full extent of AI involvement.

AI does not:

  • Touch, access, or process customer data — it operates on metadata and schema definitions only
  • Generate production artefacts without review — custom logic code is AI-assisted, but reviewed, QA'd, and verified in test runs before any deployment to development, system test, or cutover systems
  • Make decisions — it suggests options; consultants decide
  • Run during cutover or rehearsal — the production pipeline is fully deterministic with no AI dependency
  • Train on customer data — no customer data is used as training input for any model
  • Operate unsupervised — every AI suggestion passes through a mandatory review/approve gate before it can influence a governed spec

The review/approve gate

Between AI-assisted optioning and governed specs sits an explicit gate. Consultants review AI suggestions against source data profiles, target requirements, and customer business rules. Only accepted recommendations flow into the governed spec. Overridden or rejected suggestions are logged but have no effect on delivery.
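The gate above can be pictured as a small piece of logic. This is a minimal, hypothetical sketch (class and field names are ours, not elfware's actual tooling): every suggestion is logged, but only approved ones reach the governed spec.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    source_field: str
    target_field: str
    rationale: str

@dataclass
class ReviewGate:
    """Only consultant-approved suggestions reach the governed spec;
    rejected ones are logged but have no effect on delivery."""
    governed_spec: list = field(default_factory=list)
    decision_log: list = field(default_factory=list)

    def review(self, suggestion: Suggestion, approved: bool,
               reviewer: str, reason: str = "") -> None:
        # Every suggestion is recorded, accepted or not.
        self.decision_log.append({"suggestion": suggestion,
                                  "approved": approved,
                                  "reviewer": reviewer,
                                  "reason": reason})
        if approved:
            self.governed_spec.append(suggestion)

gate = ReviewGate()
gate.review(Suggestion("CUST_NO", "customer_id", "name match"),
            approved=True, reviewer="consultant-1")
gate.review(Suggestion("CUST_TYP", "segment", "pattern match"),
            approved=False, reviewer="consultant-1",
            reason="segment is derived from sales history, not CUST_TYP")
```

The asymmetry is the point: the log keeps both decisions for audit, but delivery only ever sees the approved subset.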

All mapping decisions taken by consultants are then confirmed with customers and tested through development, system testing, and dress rehearsal / dry run activities before production cutover.

Decision traceability

Every decision is traceable end-to-end: what AI suggested, what the consultant chose (and why), what was generated from that choice, and what evidence the validators produced. This chain is stored, versioned, and available for audit. If AI is removed entirely, the same governed specs still drive the same deterministic output. AI is not on the critical path.

3) Governed specs and templates: the delivery contract

Once decisions are made, they are expressed as governed inputs:

  • Mapping definitions (source → target, transformations, notes)
  • Orchestration intent (sequence, dependencies, run groups)
  • Policies (naming rules, sequencing windows, “must have” fields)
  • Evidence requirements (what gets validated, when, and how)

This is the critical boundary: from here onwards, the objective is repeatability, auditability, and controlled change.
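To make the "delivery contract" concrete, here is an illustrative shape for one governed spec entry. The field names and values are hypothetical, not elfware's actual schema; the point is that every decision category above becomes structured, versioned data rather than prose in a slide deck.

```python
# Hypothetical governed spec entry; structure and names are illustrative.
governed_spec = {
    "version": "1.4.0",
    "mappings": [
        {"source": "LEGACY.CUST.CUST_NO",
         "target": "party.customer_id",
         "transform": "strip_leading_zeros",
         "note": "Decision D-017: confirmed with customer SME"},
    ],
    "orchestration": {"run_group": "masters",
                      "depends_on": ["reference_data"]},
    "policies": {"mandatory_fields": ["customer_id", "status"]},
    "evidence": {"checks": ["row_counts", "referential_integrity"]},
}
```

Because the spec is data, it can be diffed, versioned, and fed directly into generation — which is what makes the downstream stages repeatable.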

4) Deterministic generation: code-printing the delivery

Instead of manually building artefacts, we generate them deterministically from the governed model:

  • Extraction/load scripts
  • Transformation steps
  • Orchestration/run sequencing
  • Reconciliation queries and evidence packs
  • Run-time wrappers (logging, restart/recovery, traceability)

The same inputs produce the same outputs. When something changes, you update the model and reprint the artefacts, rather than patching scripts ad hoc.
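The "same inputs, same outputs" property can be sketched in a few lines. This is a toy renderer, not elfware's generator — the template and spec shape are invented for illustration — but it shows the contract: the artefact is a pure function of the governed spec, so regenerating it is always safe.

```python
import hashlib

def print_artefact(spec: dict) -> str:
    """Deterministically render a load script from a governed spec.
    Pure function of its input: no randomness, no hidden state."""
    lines = [f"-- generated from spec v{spec['version']}"]
    for m in spec["mappings"]:
        lines.append(
            f"INSERT INTO {m['target']} "
            f"SELECT {m['transform']}({m['source']}) FROM staging;")
    return "\n".join(lines)

spec = {"version": "1.4.0",
        "mappings": [{"source": "CUST_NO",
                      "target": "customer_id",
                      "transform": "strip_leading_zeros"}]}

a = print_artefact(spec)
b = print_artefact(spec)
# Identical inputs yield byte-identical artefacts.
assert hashlib.sha256(a.encode()).hexdigest() == \
       hashlib.sha256(b.encode()).hexdigest()
```

Hashing generated artefacts is one simple way to prove to an auditor that nothing was hand-patched between rehearsal and cutover.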

Controlled non-determinism pipeline

The left side is where humans and AI collaborate on decisions. The right side is fully machine-executed and repeatable.

[Diagram: Controlled non-determinism: human decisions + AI speed, deterministic delivery]

Non-deterministic (human + AI): human decision authority (scope, rules, criteria) → AI-assisted optioning → governed specs & policies (review, select, lock inputs).

Deterministic (machine-executed): deterministic generation (code-printed artefacts) → deterministic validators (pass/fail, objective evidence) → audit-ready rehearsal (repeatable runs) → cutover execution (same chain, higher certainty).

AI accelerates exploration; deterministic artefacts and validators make delivery repeatable, auditable, and safer. AI assists configuration only. No customer data.

5) Deterministic validation mesh: objective pass/fail evidence

Delivery speed is meaningless without assurance. We use deterministic validators to make quality gates executable:

  • Schema/format checks (types, mandatory fields, allowable values)
  • Referential integrity and hierarchy checks
  • Balance checks (on-hand, commitments, movements)
  • Delta controls (what changed since last rehearsal)
  • Business rule assertions (lifecycle, effective dates, statuses)
  • Traceability (lineage between input decisions and output artefacts)

Validators generate repeatable evidence: pass/fail results plus the supporting artefacts behind each decision. This replaces subjective "it looks right" reviews with objective checks.
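Two of the checks above can be sketched as pure functions that return evidence records rather than opinions. The check names and record shape here are illustrative assumptions, not elfware's actual validator format.

```python
def count_check(source_rows: list, target_rows: list) -> dict:
    """Row-count reconciliation: did everything arrive?"""
    return {"check": "row_counts",
            "passed": len(source_rows) == len(target_rows),
            "evidence": {"source": len(source_rows),
                         "target": len(target_rows)}}

def referential_check(child_keys: list, parent_keys: list) -> dict:
    """Referential integrity: every child key must have a parent."""
    orphans = sorted(set(child_keys) - set(parent_keys))
    return {"check": "referential_integrity",
            "passed": not orphans,
            "evidence": {"orphans": orphans}}

results = [
    count_check(source_rows=[1, 2, 3], target_rows=[1, 2, 3]),
    referential_check(child_keys=["C1", "C9"],
                      parent_keys=["C1", "C2"]),
]
# Each result is an objective, repeatable record: the same data
# always produces the same pass/fail outcome and the same evidence.
```

Because the evidence is structured, it can be compared run-on-run, which is what makes the rehearsal stage compound.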

6) Repeatable rehearsals: programme certainty grows each run

Because everything is generated and validated consistently, rehearsals become a compounding process:

  • Each run produces comparable evidence
  • Defects are detected earlier, not during cutover
  • Fixes are made in the model and reprinted
  • Governance becomes easier because evidence is consistent
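"Comparable evidence" means rehearsal results can be diffed mechanically. A minimal sketch, assuming each run's evidence is reduced to named pass/fail outcomes (the function and data shapes are illustrative):

```python
def rehearsal_delta(prev: dict, curr: dict) -> dict:
    """Return only the checks whose outcome changed between runs."""
    return {check: {"prev": prev.get(check), "curr": curr[check]}
            for check in curr if prev.get(check) != curr[check]}

run1 = {"row_counts": True, "referential_integrity": False, "balances": True}
run2 = {"row_counts": True, "referential_integrity": True, "balances": True}

changed = rehearsal_delta(run1, run2)
# Only the checks that moved between runs need governance attention.
```

Reviewing a delta of a handful of changed checks, instead of re-reading a full evidence pack each run, is what lets certainty grow rather than reset.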

This is how programmes move from “big bang anxiety” to confident cutover.

Risk controls

Three layers of deterministic validators ensure every record, every field, every relationship is checked before cutover.


Adaptors & accelerators

Reusable adaptors accumulate with each engagement, making every project faster than the last.


What you get (artefacts, not just advice)

A typical engagement produces tangible, re-usable delivery assets:

  • Mapping workbooks/specs with structured notes
  • Orchestration model and run schedules
  • Generated scripts and wrappers for repeatable execution
  • Validator packs and reconciliation evidence
  • Cutover rehearsal playbooks
  • An adaptor library footprint you can reuse

Where it fits

This model works especially well when:

  • Multiple systems are changing at once (ERP + planning platforms)
  • Auditability matters (financial or regulated environments)
  • Time is constrained and rehearsals must be efficient
  • Teams rotate and knowledge must be captured in assets, not in people

Next step: If you want to understand your programme's biggest risk drivers, run a short assessment and get a recommended path.

Frequently asked questions

How long does a typical migration take?
It depends on system complexity and data volume. Our prototype phase (2–4 weeks) provides accurate estimates, and we plan migration work ahead of the wider project timeline so that transition and cutover are insulated from delay.
Do you replace our existing team?
No. We augment your team with specialist migration expertise. Your team maintains business knowledge and system ownership; we provide technical acceleration and risk mitigation.
What happens if decisions change mid-project?
Governed specs are version-controlled. When a decision changes, the new version flows through the same deterministic pipeline. Old artefacts are replaced, and the assurance mesh re-validates everything.
How do you handle custom modifications?
We map all customisations during the Decide phase. Our adaptors are tuned to your specific configuration, ensuring custom logic is preserved and validated.
What if issues are found during rehearsal?
That is exactly why we rehearse. Issues caught in rehearsal feed back into the Decide phase, the specs are updated, and the deterministic pipeline regenerates corrected artefacts.
Does AI generate production migration code?
No. AI is used only to suggest mapping candidates and transformation patterns. Consultants review and accept or override every suggestion. Once a mapping decision is locked, all production artefacts — scripts, load files, orchestration, evidence packs — are deterministically generated with no AI involvement.
Does AI access or process our data?
No. AI operates on metadata and schema definitions only — source structures, target structures, and business rules. It never sees, touches, or processes actual customer data. No customer data is used as training input for any model.
Who is accountable if AI suggests the wrong mapping?
The consultant. Every AI suggestion passes through a mandatory review/approve gate. The consultant evaluates it against source data profiles, target requirements, and customer rules before accepting or overriding. All decisions are then confirmed with customers and tested through development, system test, and dress rehearsal before production cutover.
Can we audit the AI’s involvement?
Yes. Every decision is fully traceable: what AI suggested, what the consultant chose (and why), what was generated from that choice, and what evidence the validators produced. This lineage is stored, versioned, and available for audit at any point.
Will AI reduce engagement quality or replace our SMEs?
No. AI accelerates the grunt work of initial mapping analysis so consultants can focus on exceptions, edge cases, and business logic. Your SMEs remain central to scope definition, rules, acceptance criteria, and sign-off. AI handles breadth; people handle judgement.
Does AI comply with our governance and change control?
AI suggestions are treated the same as any consultant recommendation — they enter the governed spec only after explicit approval and are subject to the same versioning, change tickets, and promotion processes as every other artefact.
What happens if the AI model changes or is unavailable?
AI is not on the critical path. If the AI layer changes or is removed entirely, the same governed specs still drive the same deterministic output. AI accelerates the early optioning phase; it does not gate production delivery.
Could AI introduce bias into mapping decisions?
AI may suggest patterns based on prior schemas, which is why every suggestion is reviewed by a consultant against actual source data and customer rules. Deterministic validators then independently verify correctness — catching any mis-mapping regardless of how it was proposed.
Will this lock us into elfware’s tooling?
No. Governed specs, generated artefacts, and evidence packs are all transparent, exportable, and understandable without elfware tooling. The decision logs, mapping workbooks, and generated code are yours. We build delivery assets, not vendor dependency.
What’s the real value of AI vs normal automation?
Speed in the optioning phase: AI surfaces mapping candidates and highlights inconsistencies faster than manual analysis. This reduces the time from source analysis to locked decisions. Everything downstream — generation, validation, rehearsal — is deterministic automation, not AI.

Ready to de-risk your migration?

Same-day response (Mon-Fri)