
Progressive history builds: a lean, iterative approach to forecasting and planning history delivery
Key Takeaways
- Forecasting and planning platforms like Relex and Board require deep transactional history, not just master data.
- Progressive history builds loop week-by-week, loading only what each week needs, instead of replicating entire warehouses to staging.
- Cross-week datasets are loaded once at first relevance and retained until no longer needed.
- Two modes: iterate fast with 1-2 weeks during development, then append one new week per cycle once mappings stabilise.
- Validation and reconciliation act as hard gates per week, isolating defects early and building a robust audit trail.
History is required. Warehouse replication is optional.
ERP programmes -- Oracle Retail, Oracle Fusion, SAP, Dynamics 365 -- treat history delivery as part of the cutover run. The standard pattern is to stage full history up front: extract the warehouse, transform in bulk, validate as a single batch, and load to target. For ERP master data and recent transactions, this works.
Forecasting and planning platforms break this pattern. Systems like Relex (demand forecasting and replenishment) and Board (planning and simulation) need deep transactional history -- two or more years of sales, stock movements, prices, promotions, and markdowns at granular level. They also need early outputs for model calibration and controlled iteration as mappings evolve.
Bulk staging does not support this. It delays validation, inflates infrastructure, and forces full re-runs when any mapping changes. Progressive history builds address this mismatch directly: a delivery pattern designed for platforms that ingest history incrementally and require iterative validation throughout.
Why ERP-style history delivery breaks down for forecasting platforms
ERP history delivery optimises for completeness at cutover. The goal is a single, validated dataset ready for go-live. Forecasting and planning platforms optimise for something different: incremental ingestion and iterative validation. The model needs to consume history progressively, week by week, to calibrate correctly.
When teams apply the ERP pattern to forecasting history, three problems compound:
- Inflated infrastructure footprint. Staging mirrors the full warehouse volume, even though only a fraction is under active development at any point.
- Delayed validation. No output reaches the target system until the entire history has been processed, which may take weeks or months.
- Increased re-run blast radius. When mappings change -- and they always change during development -- the team must re-process the full history. Work already validated is re-run alongside the fix.
"The staging environment becomes a warehouse in its own right. And every iteration cycle re-processes history that was already correct, burning time and infrastructure cost for no incremental value."
Progressive history builds align delivery with how forecasting and planning platforms ingest time-sliced historical data.
The shift: build history progressively
elfware's progressive history build reverses the bulk-staging assumption. Instead of replicating the warehouse and processing everything at once, the approach loops through history one output week at a time.
Definition
Progressive history build: for each output week, load only the data required for that week. Load cross-week files once at their first point of relevance and retain them until they are no longer relevant. This enables iterating with 1-2 weeks during development and appending one new week per cycle without re-running full history once mappings are stable.
The staging footprint collapses from the full warehouse down to the current week's data plus any retained cross-week datasets. Iteration speed increases because developers can test mapping changes against one or two weeks in minutes rather than the full history in hours or days.
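As a minimal sketch of the loop described above (the function and type names are illustrative assumptions, not elfware tooling), the weekly cycle might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Staging:
    """Holds only the current week's data plus retained cross-week datasets."""
    weekly: dict = field(default_factory=dict)      # discarded each cycle
    cross_week: dict = field(default_factory=dict)  # retained while relevant

def build_week(week, staging, load, transform, validate, publish):
    """One iteration of the progressive history build loop."""
    staging.weekly = load(week)                 # load Week W data only
    output = transform(week, staging)           # scope limited to one week
    if not validate(week, output):              # hard gate: halt at the defective week
        raise RuntimeError(f"reconciliation failed at week {week}")
    publish(week, output)                       # published weeks are never re-run
    staging.weekly = {}                         # weekly data discarded after the cycle

def build_history(weeks, staging, **hooks):
    """Drive the loop: advance from W to W+1, appending one validated week per cycle."""
    for week in weeks:
        build_week(week, staging, **hooks)
```

The same driver serves both operating modes: in iteration mode the `publish` hook discards output, in delivery mode it appends to the target.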
Warehouse Replication
- Clone the full warehouse to staging
- Transform the entire history in bulk
- Discover defects across the full dataset
- Fix, re-run the entire history, repeat
- Staging footprint: 100% of warehouse volume

Progressive History Build
- Load only Week W data
- Retain cross-week files at first relevance
- Transform, validate, and reconcile Week W
- Publish Week W, advance to W+1
- Staging footprint: 1-2 weeks plus retained cross-week files
How it works (the week loop)
The progressive history build follows a disciplined six-step loop for each output week:
Define time slice and relevance rules
Establish which data domains are weekly (loaded and discarded each cycle) and which are cross-week (retained across a defined relevance window: a dataset is loaded into staging when it is first relevant to a week and kept only as long as subsequent weeks still need it). Sales and stock snapshots are typically weekly. Price lists, promotion calendars, and markdown plans span multiple weeks.
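A minimal way to express such relevance rules, assuming a simple weekly/cross-week classification (the schema and domain keys are illustrative, not an elfware standard):

```python
# Illustrative relevance rules. Domain names follow the article;
# the structure is an assumption, not a defined elfware schema.
RELEVANCE_RULES = {
    "sales":      "weekly",       # loaded fresh, discarded each cycle
    "stock":      "weekly",
    "price_list": "cross_week",   # retained while a version covers the week
    "promotions": "cross_week",
}

def domains_to_load(staged):
    """Weekly domains always load; cross-week domains load only at
    their first point of relevance (i.e. when not already staged)."""
    return [domain for domain, kind in RELEVANCE_RULES.items()
            if kind == "weekly" or domain not in staged]
```

On the first cycle every domain loads; on later cycles only the weekly domains do, which is where the staging footprint reduction comes from.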
Load Week W only
Extract only the source data required for the current output week. Weekly datasets are loaded fresh. Cross-week datasets are loaded at their first point of relevance if not already present in staging.
Retain cross-week datasets
Datasets like price lists persist in staging until replaced by a new version or until they fall outside the relevance window. This avoids redundant extraction and ensures referential consistency across weeks.
Transform (model-driven)
Apply transformation logic to the week's data. Because the scope is constrained to a single week, transform jobs run faster and logs are easier to inspect.
Validate and reconcile as a hard gate
Each week must pass reconciliation checks before the loop advances. Row counts, value totals, and business rules are verified. Failures halt the loop at the defective week, isolating the problem immediately.
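A hard gate can be as simple as comparing per-week source and target totals. The metric names and tolerance handling below are illustrative assumptions:

```python
def reconcile_week(week, source_totals, target_totals, tolerance=0.0):
    """Hard-gate check for one output week: compare row counts and value
    totals between source and target. Any failed check should halt the
    loop at this week. Metric names are illustrative."""
    report = {}
    for metric in ("row_count", "sales_value"):
        src, tgt = source_totals[metric], target_totals[metric]
        report[metric] = abs(src - tgt) <= tolerance
    report["passed"] = all(report.values())
    return report
```

Because the check runs per week, a failing report names exactly one week, which is what makes defect isolation immediate.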
Publish Week W and repeat
Validated output is published to the target. The loop advances to Week W+1. Already-published weeks are not re-processed.
Example: price history across eight weeks
Consider a typical eight-week build for a Relex implementation. Week W1 loads sales, stock, and price data. Reference data (product hierarchy, store master) is loaded once during the first cycle and retained. The active price list is loaded at W1 and retained across subsequent weeks while still relevant -- if a new price list takes effect at W5, it replaces the previous version automatically.
If transformation logic changes during development -- say a mapping rule for promotional pricing is corrected at W3 -- only the affected weeks (W1 through W3) are re-run. Weeks not yet processed are unaffected. If logic remains stable after W3, weeks W4 through W8 are appended sequentially without any full reload.
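The re-run scope after a fix can be computed mechanically. This sketch assumes the conservative rule from the example, where every already-published week up to and including the defect week is re-run:

```python
def rerun_scope(published_weeks, defect_week):
    """Weeks to re-run after a mapping correction: only the published
    weeks affected by the fix (conservatively, everything up to and
    including the defect week). Weeks not yet processed need no action."""
    return [w for w in published_weeks if w <= defect_week]
```

With the defect found at W3 after three published weeks, the blast radius is three weeks, not the full history.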
The result: eight weeks of validated, published history built incrementally, with a staging footprint that never exceeded two weeks of data plus retained cross-week files.
Two operating modes: iteration vs delivery
The progressive history build supports two distinct modes that map to the natural rhythm of a data delivery project:
Development and mapping refinement
- Run 1-2 weeks only
- Fast feedback on mapping changes
- Discard output after each iteration
- Minimal infrastructure cost
Progressive build through full history
- Append one new week per cycle
- Already-validated weeks are not re-run
- Continuous progress towards cutover
- Full audit trail per published week
In practice, teams start in iteration mode to stabilise mappings and business rules. Once confidence is established, they switch to delivery mode and begin appending weeks. The transition is seamless because the same loop drives both modes, and the only difference is whether the output is retained or discarded.
Enterprise outcomes
Organisations that adopt the progressive history build model consistently report the following outcomes:
Smaller staging footprint
Infrastructure costs drop because staging only holds the current week plus retained cross-week files.
Faster iteration cycles
Developers validate mapping changes against 1-2 weeks in minutes, not the full history in hours.
Cleaner defect isolation
Validation failures pin to a specific week, eliminating the noise of full-history defect reports.
Append-only progress
Each validated week is published and never re-processed, creating linear progress towards cutover.
Stronger audit trail
Per-week reconciliation reports provide granular evidence for governance and sign-off.
Earlier usable outputs
The target system can begin ingesting validated weeks before the full history is complete.
Why it matters for Relex and Board
Relex (forecast and replenishment) and Board (planning and simulation) share a common dependency: they consume deep transactional history across several data domains to calibrate their models. The domains that matter most are:
- Sales history at daily or weekly grain, typically 2-3 years
- Stock positions and movements for availability and wastage analysis
- Price lists with effective dates and version chains
- Markdowns and promotions including calendar overlaps and exception handling
Each of these domains has different temporal characteristics. Sales and stock are inherently weekly: each week's data is independent. Price lists span multiple weeks and change infrequently. Promotion calendars may overlap and require careful sequencing.
"The weekly loop maps naturally to how forecasting and planning platforms consume data. Rather than forcing the target system to ingest and sort a bulk load, you deliver history in the same cadence the model expects to process it."
The progressive history build handles these different temporal patterns through relevance rules. Weekly datasets are loaded fresh each iteration. Cross-week datasets are loaded once and retained. When a new version of a cross-week dataset becomes relevant (for example, a new price list takes effect in Week 14), it replaces the previous version in staging automatically.
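Version replacement for a cross-week dataset reduces to picking the latest version whose effective date is not after the week being built. A sketch, assuming versions sorted by effective date (real price lists also carry end dates and overrides, which are omitted here):

```python
import bisect
from datetime import date

def active_version(version_chain, week_start):
    """Return the cross-week dataset version in effect for a given week.

    version_chain: list of (effective_from, payload) pairs sorted by date.
    When a new version takes effect, it supersedes the prior one.
    Illustrative sketch only."""
    effective_dates = [eff for eff, _ in version_chain]
    idx = bisect.bisect_right(effective_dates, week_start) - 1
    if idx < 0:
        return None                  # no version relevant to this week yet
    return version_chain[idx][1]
```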
This matters practically because it enables early iteration on the most complex mapping domains (promotions and markdowns) without waiting for the full extract to complete. Teams can identify and resolve business rule issues in the first few weeks of data rather than discovering them after processing two years of history.
What it looks like in governance: validation and reconciliation
The hard gate between each week is what makes the progressive build trustworthy at enterprise scale. Every published week carries a reconciliation report that covers row counts, value totals, referential integrity, and business-rule compliance.
In practice, governance teams receive per-week evidence packs rather than monolithic reports across the full date range. This makes sign-off faster and more confident.
Start small: principles and impact
The progressive history build rests on four core principles:
Week-by-week delivery
Process history one output week at a time. Never load the full warehouse to staging.
Relevance window retention
Load cross-week datasets once at first relevance. Retain only as long as needed.
Append instead of re-run
Validated weeks are published and never re-processed. Progress is linear.
Governance per loop
Every week passes reconciliation hard gates before the loop advances.
Applied consistently, these principles deliver measurable impact: a reduced staging footprint, faster iteration on mapping changes, a smaller re-run blast radius when corrections are needed, and lower cutover risk because the target system receives validated history continuously rather than in a single bulk load.
You don't need to commit to a full programme to test the approach. elfware typically runs a focused discovery: map the data domains, define relevance rules, run 2-4 weeks through the loop against your actual data, and deliver a reconciliation pack that demonstrates the model in practice.
Key Takeaways
- Progressive history builds enable week-by-week data delivery instead of bulk warehouse replication.
- This reduces infrastructure cost, accelerates iteration, and isolates defects earlier.
- It aligns with how forecasting and planning platforms like Relex and Board ingest time-sliced historical data.
- Teams can prove mappings with 1–2 weeks before scaling to full history.
History driving staging complexity?
If you’re delivering forecasting or planning platforms and history is driving staging complexity, we’re happy to compare notes.
Talk to us.
