
Data Migration · Delivery Model · elfware

AI-infused data migration: why controlled non-determinism reduces programme risk

elfware · 14 min read

Key Takeaways

  • Human consulting judgement remains the decision authority. AI accelerates configuration and scales analysis -- it does not replace governance.
  • Deterministic code generation from governed specs eliminates spreadsheet drift and manual scripting defects.
  • A validator mesh (mapping, orchestration, plan) provides objective pass/fail gates rather than subjective assurance.
  • Reusable adaptors compound across programmes, reducing ramp-up time and defect rates with each delivery.
  • This approach is faster and safer than traditional consulting, and safer than unguided AI coding.

Why data migrations fail

Enterprise data migrations carry a reputation for overruns, rework, and late-stage surprises. The root causes are well known to anyone who has managed a large programme:

  • Ambiguity in mapping specifications. Mappings live in spreadsheets that accumulate conflicting versions. Analysts, developers, and testers work from different copies, and discrepancies surface only during integration.
  • Handoff friction between workstreams. Functional analysts define mappings. Technical developers implement them. Testers validate a third interpretation. Each handoff introduces risk.
  • Spreadsheet drift. The mapping workbook is the de facto source of truth, yet it has no enforcement mechanism. Changes are tracked manually -- or not at all -- and the code diverges from the spec.
  • Manual scripting. Migration code is hand-written and hand-maintained. Defect rates track directly with code volume, and knowledge is locked in the heads of the developers who wrote it.
  • Inconsistent quality assurance. Validation varies between workstreams. Some apply rigorous reconciliation; others rely on spot checks and visual inspection. Governance receives subjective assurance rather than objective evidence.

These failure modes are not caused by a lack of effort. They are structural: the delivery model itself creates the conditions for drift, rework, and undetected defects. Addressing them requires a different model, not more effort within the existing one.


The operating model

elfware's delivery model places AI in a specific, bounded role. Human consultants make the decisions. AI accelerates the exploration and configuration that inform those decisions. Everything downstream of the governed specification is deterministic: the same input produces the same output, every time.

The pipeline runs as follows:

Controlled Non-Determinism Pipeline

  1. Human decisions: scope, rules, priorities
  2. AI-assisted optioning: mapping suggestions, config drafts
  3. Governed specs: workbook as source of truth
  4. Deterministic generation: code-printed artefacts
  5. Deterministic validation: lint, CI, executable checks
  6. Repeatable runs: same input = same output

Non-deterministic AI sits in the optioning layer only. Everything downstream is deterministic and auditable.

Core principle

Controlled non-determinism

AI is non-deterministic by nature. elfware confines that non-determinism to the optioning layer, where it accelerates analysis and configuration. The moment a human approves a specification, the downstream process -- code generation, validation, execution -- is fully deterministic. This is what makes the model faster and safer than either traditional consulting or unguided AI coding.

The operating model separates concerns cleanly. Consulting judgement owns scope, priorities, and sign-off. AI handles the high-volume, pattern-matching tasks that are slow and error-prone for humans: scanning source schemas, suggesting mapping candidates, generating configuration drafts. Deterministic tooling handles everything that must be repeatable and auditable.
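The division of labour above can be sketched in a few lines of code. This is a minimal illustration, not elfware's actual tooling: every function name here (suggest_mappings, approve, generate_artefact) is invented for the example.

```python
# Illustrative sketch of the pipeline's division of labour.
# All names are hypothetical, not elfware's actual API.

def suggest_mappings(source_columns):
    """Optioning layer: in practice an AI model proposes candidates here.
    Output is advisory only -- it never executes directly."""
    return [{"source": c, "target": c.lower(), "rule": "copy"} for c in source_columns]

def approve(suggestions):
    """Human decision point: consultants review and sign off.
    Stubbed here as accepting every suggestion."""
    return suggestions

def generate_artefact(spec):
    """Deterministic layer: the same approved spec always yields
    byte-identical output (mappings sorted for stable ordering)."""
    cols = ", ".join(f"{m['source']} AS {m['target']}"
                     for m in sorted(spec, key=lambda m: m["target"]))
    return f"SELECT {cols} FROM source_table"

spec = approve(suggest_mappings(["CustomerID", "OrderDate"]))
assert generate_artefact(spec) == generate_artefact(spec)  # same input = same output
```

The point of the sketch is the boundary: variability is confined to suggest_mappings, and everything after approve is a pure function of the spec.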


What “deterministic low-code delivery” means in practice

Three practices define how elfware delivers migration artefacts without traditional hand-coding:

Workbook as source of truth

The mapping workbook (or governed specification) is the single authoritative input. It captures business decisions, transformation rules, and validation criteria in a structured format. Every generated artefact traces back to a specific cell or rule in this workbook. There is no parallel codebase to reconcile.

Regeneration-first (avoid patching)

When a mapping changes, elfware regenerates the affected artefacts from the updated spec. This is the opposite of the patch-and-fix cycle common in traditional delivery, where developers locate the relevant code, apply a manual change, and hope nothing else breaks. Regeneration guarantees that the running code always reflects the current spec exactly.
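Regeneration-first can be demonstrated with a small sketch, under the assumption that each artefact carries a lineage hash of the spec version that produced it. Field names (version, column, expression) are illustrative only.

```python
import hashlib
import json

# Sketch of regeneration-first delivery: artefacts are rebuilt from the
# versioned spec, never patched in place. All field names are invented.

def regenerate(spec):
    code = f"UPDATE target SET {spec['column']} = {spec['expression']}"
    # Lineage trace: a hash of the governing spec version travels
    # with every generated artefact.
    lineage = hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()[:12]
    return {"code": code, "spec_version": spec["version"], "lineage": lineage}

v1 = {"version": 1, "column": "price", "expression": "unit_price * 1.20"}
v2 = {"version": 2, "column": "price", "expression": "unit_price * 1.25"}

original = regenerate(v1)
updated = regenerate(v2)      # mapping changed: regenerate, don't patch
rolled_back = regenerate(v1)  # rollback is regeneration from v1, not a manual revert

assert rolled_back == original  # deterministic: identical artefact every time
```

The final assertion is the rollback guarantee described later in this article: because artefacts are pure functions of versioned specs, reverting to version 1 reproduces the original artefact exactly.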


Deterministic validators

A validator mesh operates across three layers: mapping validators confirm that transformation logic matches the spec; orchestration validators confirm that artefacts execute in the correct sequence and with the correct dependencies; and plan validators confirm that the overall programme plan (volumes, timings, environments) is internally consistent. Each validator produces a binary pass/fail result. There is no ambiguity.

Validator mesh

The three-layer validator mesh runs automatically as part of every build cycle. Failures halt promotion. This means that governance teams receive objective evidence -- not status reports or assurance narratives -- at every gate. A defect detected by a validator in development never reaches a test environment.
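A minimal sketch of the three-layer mesh follows. The individual checks are invented for illustration; the structural point is that each layer returns a binary result and the gate requires all of them to pass.

```python
# Illustrative validator mesh: three layers, each a binary check.
# The specific checks are hypothetical examples, not elfware's rules.

def mapping_validator(spec, artefact):
    """Does the generated code reference every mapped target column?"""
    return all(m["target"] in artefact for m in spec)

def orchestration_validator(plan):
    """Does every artefact run after the artefacts it depends on?"""
    position = {step: i for i, step in enumerate(plan["order"])}
    return all(position[dep] < position[step]
               for step, deps in plan["depends_on"].items() for dep in deps)

def plan_validator(plan):
    """Is the batch volume within the environment's capacity?"""
    return plan["rows"] <= plan["env_capacity"]

def gate(results):
    """Promotion halts unless every validator passes -- no partial credit."""
    return all(results.values())

spec = [{"source": "SKU", "target": "sku"}]
artefact = "SELECT SKU AS sku FROM items"
plan = {"order": ["extract", "transform", "load"],
        "depends_on": {"transform": ["extract"], "load": ["transform"]},
        "rows": 400_000, "env_capacity": 1_000_000}

results = {"mapping": mapping_validator(spec, artefact),
           "orchestration": orchestration_validator(plan),
           "plan": plan_validator(plan)}
assert gate(results)
```

A single failing layer makes gate return False, which is what "failures halt promotion" means in practice: the evidence is the results dictionary itself, not a status report.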


Programme risk controls

The delivery model provides four structural risk controls that are absent or manual in traditional approaches:

Auditability

Every generated artefact carries a lineage trace to its source specification. Auditors can walk from any line of migration code back to the business decision that created it.

Traceability

Change history is recorded at the spec level, not the code level. When a mapping changes, the system records who changed it, when, and why. The regenerated code inherits this provenance.

Rollback and preview

Because artefacts are generated from versioned specs, rolling back to a previous version is a regeneration, not a manual code revert. Preview runs execute the regenerated artefacts against sample data before promotion.

Objective pass/fail gates

Validator results are binary. A migration batch either passes all validators or it does not promote. This replaces subjective assurance ('we believe the data is correct') with objective evidence ('all 47 validators passed').

“The difference is between a programme director saying ‘the team assures me the data is correct’ and a programme director saying ‘all validation gates passed and here is the evidence pack.’ The second is a fundamentally different risk posture.”


The adaptor flywheel

Migration programmes typically start from scratch. Each new engagement rebuilds connectors, transformation logic, and validation rules from first principles. elfware's adaptor flywheel changes these economics:

Adaptor Flywheel Loop

  1. Customer programme: deliver migration for a specific ERP/target combination
  2. Extract adaptor patterns: generalise connectors, validators, and templates from delivery
  3. Catalogue and harden: add to reusable library with tests and documentation
  4. Next programme inherits: new customer starts with pre-built adaptors, reducing ramp-up
  5. Knowledge compounds: policies, edge cases, and validators accumulate across programmes

Cycle repeats. Each programme accelerates the next.

Compounding effect

Adaptor flywheel

Each customer programme contributes back to the adaptor library. Connectors to common source systems (Oracle RMS, Oracle Fusion, SAP, Dynamics 365) are hardened and reused. Validation rules for common data domains (product hierarchies, pricing, promotions) accumulate. Edge-case handling discovered on one programme benefits all subsequent programmes. The fifth programme using a given adaptor runs materially faster than the first.

This is not a theoretical benefit. It manifests in three measurable ways: reduced ramp-up time (new programmes inherit working adaptors), fewer first-run defects (validators already cover known edge cases), and shorter cycle times (configuration starts from a proven baseline rather than a blank workbook).

How knowledge capture compounds

Beyond adaptors, the flywheel captures three types of reusable knowledge:

  • Decision templates. Pre-populated mapping workbooks for common source-to-target combinations, with validated defaults and documented exceptions.
  • Policies. Organisational rules about how specific data domains should be handled (e.g., promotional pricing precedence, markdown sequencing, product hierarchy flattening).
  • Validators. Executable checks that encode business rules and data quality expectations, ready to deploy on day one of a new programme.
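One way the third category could work is a registry pattern: checks hardened on one programme are catalogued and deployed wholesale on the next. This sketch is hypothetical; the domain names and rules are invented for illustration.

```python
# Sketch of a reusable validator catalogue. Checks registered on one
# programme are available on day one of the next. All names are invented.
VALIDATOR_CATALOGUE = {}

def register(domain):
    """Decorator that files a check under its data domain."""
    def wrap(fn):
        VALIDATOR_CATALOGUE[f"{domain}.{fn.__name__}"] = fn
        return fn
    return wrap

@register("pricing")
def no_negative_prices(rows):
    return all(r["price"] >= 0 for r in rows)

@register("hierarchy")
def unique_nodes(rows):
    return len({r["node"] for r in rows}) == len(rows)

# A new programme inherits every registered check immediately.
rows = [{"node": "A", "price": 9.99}, {"node": "B", "price": 0.0}]
results = {name: fn(rows) for name, fn in VALIDATOR_CATALOGUE.items()}
assert all(results.values())
```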

Example scenario: ERP to forecasting platform migration

Consider a retailer migrating transactional history from an Oracle Retail Merchandising System (RMS) to a demand forecasting platform (such as Relex or Board). The programme requires two years of weekly sales, stock, pricing, and promotional data across 2,000 stores and 400,000 SKUs.

Traditional approach

Typical delivery

8-person team, 16-week programme

  • Mappings defined in spreadsheets, manually translated to ETL scripts
  • Full history staged and transformed in bulk
  • Defects discovered in UAT trigger full re-runs
  • 3-4 full re-run cycles before sign-off
  • Reconciliation performed manually per batch

elfware approach

AI-infused delivery

4-person team, 8-week programme

  • AI-assisted mapping optioning populates the governed workbook
  • Human consultants review and approve each mapping decision
  • Migration artefacts generated deterministically from the approved spec
  • Validator mesh runs automatically on every build; defects caught pre-run
  • Progressive history builds deliver validated weeks incrementally
  • Reusable adaptors for Oracle RMS reduce ramp-up by weeks

The key differences are structural, not just faster execution. Decision templates mean analysts start from a known-good baseline rather than a blank spreadsheet. Deterministic generation eliminates the translation gap between spec and code. The validator mesh catches defects before any data moves, not during UAT. And progressive delivery provides governance with per-week evidence packs rather than end-of-programme assurance.
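Progressive history builds can be sketched as a loop that migrates and validates one week at a time, emitting a per-week evidence record. The reconciliation check here (source versus target row counts) is a stand-in; real programmes would run the full validator mesh per batch.

```python
# Sketch of progressive history builds: validate each week before
# promoting the next, producing per-week evidence rather than a single
# end-of-programme sign-off. Field names are illustrative.

def validate_week(batch):
    # Stand-in reconciliation: source and target row counts must match.
    return all(row["source_rows"] == row["target_rows"] for row in batch)

def progressive_build(weeks):
    evidence = []
    for week, batch in sorted(weeks.items()):
        passed = validate_week(batch)
        evidence.append({"week": week, "passed": passed})
        if not passed:
            break  # halt promotion; only validated weeks are delivered
    return evidence

weeks = {"2024-W01": [{"source_rows": 1200, "target_rows": 1200}],
         "2024-W02": [{"source_rows": 1180, "target_rows": 1180}]}
evidence_pack = progressive_build(weeks)
assert all(e["passed"] for e in evidence_pack)
```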


Measurement: how to track delivery model effectiveness

Claims about delivery improvement require measurable evidence. elfware tracks the following KPIs across programmes, and we encourage clients to baseline their current delivery against the same metrics:

  • Defects caught pre-run. Percentage of defects detected by validators before any data execution. Higher means fewer UAT surprises and lower rework cost.
  • Cycle time. Time from spec approval to validated output for a single mapping batch. Shorter cycles enable more iterations within the programme window.
  • Onboarding time. Days for a new team member to make their first productive mapping contribution. Measures knowledge-transfer effectiveness and tooling accessibility.
  • Rerun reliability. Percentage of reruns that pass all validators without manual intervention. Measures determinism; the target is 100% (same input = same output).
  • Audit findings. Number of audit observations relating to data migration governance. Measures whether objective evidence is replacing subjective assurance.

These KPIs are designed to be measurable from programme data, not self-reported. Validator logs, build timestamps, and onboarding records provide the raw data. We publish aggregated benchmarks (anonymised) to help prospective clients calibrate expectations.
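Two of these KPIs can be computed directly from raw logs, as in this sketch. The log structures and field names are invented for illustration; the point is that the metrics fall out of programme data rather than self-reporting.

```python
# Sketch of deriving KPIs from validator and rerun logs.
# Log shapes and field names are hypothetical.

validator_log = [
    {"defect_id": 1, "stage": "pre-run"},
    {"defect_id": 2, "stage": "pre-run"},
    {"defect_id": 3, "stage": "uat"},
]
rerun_log = [{"run": 1, "manual_intervention": False},
             {"run": 2, "manual_intervention": False}]

# Defects caught pre-run: share of all defects found before execution.
pre_run_pct = 100 * sum(d["stage"] == "pre-run" for d in validator_log) / len(validator_log)

# Rerun reliability: share of reruns needing no manual intervention.
rerun_reliability = 100 * sum(not r["manual_intervention"] for r in rerun_log) / len(rerun_log)

print(f"Defects caught pre-run: {pre_run_pct:.0f}%")   # 67%
print(f"Rerun reliability: {rerun_reliability:.0f}%")  # target: 100%
```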


What this is not

Clarity about boundaries is as important as the claims themselves. The following are explicit acknowledgements of what elfware’s model does not claim:

We do not claim that AI is deterministic

Large language models produce variable outputs. elfware uses AI for exploration and optioning -- tasks where variability is useful. The moment a decision is made and a specification is approved, AI exits the pipeline. Code generation, validation, and execution are deterministic processes. The distinction is architectural, not semantic.

We do not claim uniqueness without benchmarks

Other consultancies invest in tooling and automation. elfware's differentiation is the specific combination of governed specs, deterministic generation, and a three-layer validator mesh. We encourage prospective clients to compare delivery models side by side, and we publish KPIs to make that comparison practical.

We do not claim AI replaces consulting judgement

Every mapping decision, every scope boundary, every go/no-go call is made by a human consultant. AI makes those humans faster and better informed, but the decision authority remains with people who understand the business context.

Boundary

Judgement vs automation

The line is clear: humans own decisions; AI accelerates the analysis that informs those decisions; deterministic tooling executes and validates the outcomes. If a task requires business judgement, a human does it. If a task is repetitive, pattern-based, and must be auditable, the tooling does it. There is no grey area in the model.


Frequently asked questions

  • Does elfware claim that AI itself is deterministic?
  • How does this differ from standard consulting delivery?
  • Can we use our existing migration tooling alongside elfware?
  • What happens if a mapping needs to change mid-programme?
  • Is this approach specific to Oracle Retail migrations?

Reduce your migration programme risk

Whether you are planning a migration, mid-programme and facing challenges, or looking to benchmark your current delivery model, we are happy to compare notes.

Talk to us · See how it works · Request a risk assessment

© 2026 elfware. All rights reserved.