
Techniques & Models

Demand forecasting, route optimization, and inventory modeling.

By D-LIT Team

The supply chain analytics techniques described in this article represent the difference between a reporting function and a decision-making function. Reporting tells you what happened. The techniques here tell you what will happen, what should happen, and in some cases, execute the right decision automatically.

This is the most technical article in the supply chain analytics section. It is written for supply chain leaders who need to evaluate, commission, or sponsor analytical capabilities - not to implement them independently. The goal is to give you sufficient depth to make informed build-versus-buy decisions, assess vendor claims, and lead analytically sophisticated conversations with your team and external partners.

For the metrics these techniques optimize, see Supply Chain KPIs. For the data sources these techniques require, see Data Sources. For how to present the outputs to stakeholders, see Dashboards and Reporting.


Demand Forecasting

Demand forecasting is the analytical foundation of the supply chain. Every inventory decision, procurement plan, and capacity allocation depends on a view of future demand. Forecast error is not a technical problem - it is a cost driver that propagates through every downstream supply chain decision.

Why forecast accuracy matters financially:

A 10 percentage point improvement in forecast accuracy (from 70% to 80% forecast accuracy, i.e., MAPE declining from 30% to 20%) typically enables a 15-20% reduction in safety stock without degrading service levels. For an organization carrying $50M in inventory, that is $7.5-10M in working capital released. Simultaneously, better forecasts reduce expediting costs, premium freight, and stockout-driven lost sales.

Statistical forecasting methods:

Simple and weighted moving averages are appropriate for stable, low-variability demand patterns. They smooth historical data but lag when demand trends change. They have no predictive capability for seasonal patterns.

Exponential smoothing extends moving averages to incorporate trend and seasonality. The Holt-Winters (triple exponential smoothing) model decomposes demand into level, trend, and seasonal components, each with its own smoothing parameter. It is fast, interpretable, and robust for the majority of retail and manufacturing SKUs.
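As an illustration of how the three components interact, here is a minimal additive Holt-Winters sketch in Python. The smoothing parameters and the quarterly demand series are illustrative assumptions; production implementations fit the parameters by minimizing historical forecast error.

```python
# Additive Holt-Winters (triple exponential smoothing), minimal sketch.
# alpha, beta, gamma and season length m are illustrative assumptions.

def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    # Initialize level, trend, and seasonal components from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]

    for t in range(m, len(y)):
        last_level = level
        s = season[t % m]
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s

    # h-step-ahead forecast: level + h * trend + matching seasonal component.
    n = len(y)
    return [level + h * trend + season[(n + h - 1) % m] for h in range(1, horizon + 1)]

# Three years of quarterly demand with mild trend and a Q3 peak (illustrative).
history = [100, 120, 140, 110, 105, 126, 147, 115, 110, 132, 154, 121]
forecast = holt_winters_additive(history, m=4)
# The one-year-ahead forecast reproduces the Q3 peak and the upward trend.
```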

ARIMA (AutoRegressive Integrated Moving Average) models capture autocorrelation in time series data - the pattern where demand this period is partially explained by demand in prior periods. ARIMA requires stationary time series (achieved through differencing) and is more computationally intensive than exponential smoothing, but captures complex autocorrelation patterns that simpler methods miss.

Machine learning forecasting methods:

Gradient boosting models (XGBoost, LightGBM) treat demand forecasting as a regression problem with engineered features: lagged demand values, day-of-week indicators, promotional flags, price variables, and external covariates such as weather or economic indicators. These models typically outperform classical statistical methods when feature engineering is done rigorously and training data is sufficient (generally 2+ years of history).

LSTM (Long Short-Term Memory) neural networks are recurrent architectures designed to capture long-range temporal dependencies in sequences. They are most advantageous for demand patterns with complex, multi-frequency seasonality or when external sequence data (financial market signals, search trend data) is incorporated.

Probabilistic forecasting generates a full distribution of demand outcomes rather than a point estimate. Instead of “we forecast 450 units,” a probabilistic model outputs “50th percentile: 450 units, 85th percentile: 520 units, 95th percentile: 590 units.” This output directly feeds safety stock calculations, enabling explicit service level targeting based on the actual demand distribution rather than assumed normality.
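A sketch of the idea: given simulated demand outcomes for one SKU-period (the samples below are invented for illustration; a real model would generate them via bootstrapping or Monte Carlo simulation), report percentiles rather than a single number.

```python
# Probabilistic forecast output: percentiles of a sample of simulated
# demand outcomes. Samples are illustrative assumptions.

def percentile(samples, p):
    # Nearest-rank percentile on sorted samples.
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

samples = [420, 435, 450, 455, 465, 470, 480, 495, 520, 560, 590, 610]
p50 = percentile(samples, 50)
p85 = percentile(samples, 85)
p95 = percentile(samples, 95)
# p50 is the point-forecast analogue; p85/p95 feed service level targeting.
```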

Forecast accuracy measurement:

Mean Absolute Percentage Error (MAPE) = (1/n) × Σ (|Actual - Forecast| / Actual) × 100
Weighted MAPE (WMAPE) = (Σ |Actual - Forecast| / Σ Actual) × 100
Forecast Bias = (Σ Forecast - Σ Actual) / Σ Actual × 100

MAPE is the most commonly reported metric but is undefined for zero actual demand and overweights low-volume SKUs. WMAPE is preferable for heterogeneous SKU portfolios because it weights by revenue or volume. Track bias alongside MAPE - a model with low MAPE but systematic positive bias will consistently over-produce inventory.
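These metrics translate directly into code. A minimal Python sketch (the demand figures are illustrative; skipping zero-actual periods in MAPE is one common convention, and bias is computed here on aggregate volumes):

```python
# Forecast accuracy metrics, minimal sketch. Periods with zero actual demand
# are skipped in MAPE (one common convention); WMAPE avoids the problem
# because its denominator is total actuals.

def mape(actuals, forecasts):
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs(a - f) / a for a, f in pairs) / len(pairs) * 100

def wmape(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

def bias(actuals, forecasts):
    # Aggregate bias: positive means systematic over-forecasting.
    return (sum(forecasts) - sum(actuals)) / sum(actuals) * 100

actuals = [100, 80, 120, 10]     # last SKU-period is low volume
forecasts = [90, 85, 110, 20]

mape_pct = mape(actuals, forecasts)    # the low-volume period dominates
wmape_pct = wmape(actuals, forecasts)  # volume weighting dampens it
bias_pct = bias(actuals, forecasts)    # slight under-forecast overall
```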

Hierarchy reconciliation: For organizations with multi-level demand hierarchies (national → regional → store → SKU), top-down and bottom-up forecasts frequently conflict. Middle-out reconciliation - forecasting at the natural demand aggregation level and disaggregating to lower levels using historical proportions - typically produces the best balance of statistical efficiency and operational usability.


Inventory Optimization

Inventory optimization determines the right amount of stock to hold at each location for each SKU, balancing service level targets against working capital costs. It is the direct application of demand forecasting outputs to replenishment policy decisions.

Economic Order Quantity (EOQ):

EOQ calculates the order quantity that minimizes the combined cost of ordering (fixed cost per purchase order) and holding (carrying cost per unit per year).

EOQ = √(2 × D × S / H)

Where:

  • D = Annual demand in units
  • S = Order setup or purchase order cost per order
  • H = Annual holding cost per unit (typically 20-30% of unit cost, capturing capital, storage, obsolescence, and insurance)
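A quick worked example in Python (input values are illustrative assumptions):

```python
from math import sqrt

# EOQ from the formula above; all inputs are illustrative.
def eoq(annual_demand, order_cost, holding_cost_per_unit):
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# 12,000 units/year, $75 per purchase order, 25% carrying rate on a $10 unit.
q = eoq(annual_demand=12_000, order_cost=75, holding_cost_per_unit=0.25 * 10)
orders_per_year = 12_000 / q
# q ≈ 849 units per order, roughly 14 orders per year.
```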

EOQ tells you how much to order at a time; it does not tell you when to order. The reorder point model addresses timing.

Reorder Point:

Reorder Point (ROP) = (Average Daily Demand × Lead Time in Days) + Safety Stock

This triggers a replenishment order when inventory falls to the reorder point, ensuring stock arrives before inventory depletes.

Safety Stock:

Safety stock buffers against two sources of uncertainty: demand variability and lead time variability. The full formula incorporating both:

Safety Stock = Z × √(LT × σ²_D + D̄² × σ²_LT)

Where:

  • Z = Service level Z-score (e.g., 1.65 for 95% service level, 2.05 for 98%, 2.33 for 99%)
  • LT = Average lead time in days
  • σ_D = Standard deviation of daily demand
  • D̄ = Average daily demand
  • σ_LT = Standard deviation of lead time in days

The simplified version (assuming constant lead time) is:

Safety Stock = Z × σ_D × √LT

Use the full formula when lead time variability from suppliers is significant - which it typically is, and which is why organizations that ignore lead time variance systematically understock.
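A Python sketch of both the safety stock and reorder point formulas, with illustrative inputs (z = 1.65 targets roughly a 95% cycle service level):

```python
from math import sqrt

# Safety stock with demand and lead time variability (full formula above),
# plus the resulting reorder point. All inputs are illustrative assumptions.

def safety_stock(z, avg_lt, sd_demand, avg_demand, sd_lt):
    return z * sqrt(avg_lt * sd_demand**2 + avg_demand**2 * sd_lt**2)

def reorder_point(avg_demand, avg_lt, ss):
    return avg_demand * avg_lt + ss

ss_full = safety_stock(z=1.65, avg_lt=14, sd_demand=12, avg_demand=40, sd_lt=3)
ss_const_lt = 1.65 * 12 * sqrt(14)   # simplified formula: constant lead time

rop = reorder_point(avg_demand=40, avg_lt=14, ss=ss_full)
# With sd_lt = 3 days, ignoring lead time variability understates the
# buffer by roughly a factor of three in this example.
```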

ABC-XYZ Classification:

ABC classifies SKUs by revenue or volume contribution (A: top 80%, B: next 15%, C: remaining 5%). XYZ classifies by demand variability (X: coefficient of variation < 0.5, Y: 0.5-1.0, Z: > 1.0). The 9-cell matrix that results guides policy differentiation:

  • AX SKUs (high value, stable demand): precise optimization, tight safety stock, continuous replenishment
  • AZ SKUs (high value, volatile demand): higher safety stock multiples, shorter review cycles, collaborative forecasting with customers
  • CZ SKUs (low value, volatile demand): candidates for SKU rationalization or make-to-order rather than make-to-stock
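A minimal sketch of both classifications in Python (revenue figures and demand series are illustrative; the cutoffs follow the thresholds above):

```python
from statistics import mean, pstdev

# ABC by cumulative revenue share (A: top 80%, B: next 15%, C: rest);
# XYZ by coefficient of variation of demand. Data is illustrative.

def abc(revenue_by_sku):
    total = sum(revenue_by_sku.values())
    ranked = sorted(revenue_by_sku, key=revenue_by_sku.get, reverse=True)
    labels, cum = {}, 0.0
    for sku in ranked:
        cum += revenue_by_sku[sku] / total
        labels[sku] = "A" if cum <= 0.80 else ("B" if cum <= 0.95 else "C")
    return labels

def xyz(demand_series):
    cv = pstdev(demand_series) / mean(demand_series)
    return "X" if cv < 0.5 else ("Y" if cv <= 1.0 else "Z")

revenue = {"SKU1": 700_000, "SKU2": 200_000, "SKU3": 60_000, "SKU4": 40_000}
abc_labels = abc(revenue)
xyz_label = xyz([100, 104, 98, 101, 97])   # stable demand, low CV
```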

Multi-echelon inventory optimization:

Single-location inventory models ignore the network. Multi-echelon optimization simultaneously determines inventory targets across all nodes in the distribution network - factory, regional DC, forward DC, retail location - accounting for replenishment lead times between nodes and the risk-pooling benefit of centralized versus decentralized inventory. The business case is typically a 10-20% inventory reduction without service level degradation, achieved by repositioning buffer stock at the network level rather than duplicating it at every node.


Network Optimization

Network optimization addresses the structural supply chain question: where should inventory be held, where should distribution centers be located, and how should product flow between suppliers, facilities, and customers to minimize total landed cost while achieving service level requirements?

Facility location modeling:

Facility location models minimize total cost (facility fixed costs + transportation costs + inventory costs) subject to service level constraints (maximum delivery time or distance to each customer zone). The mathematical formulation is a mixed-integer programming (MIP) problem:

Minimize: Σ(i) f_i × y_i + Σ(i,j) c_ij × x_ij
Subject to: Σ(i) x_ij = d_j for all customers j
            x_ij ≤ M × y_i for all i, j
            y_i ∈ {0,1}

Where f_i is the fixed cost of opening facility i, y_i is a binary open/close decision, c_ij is the per-unit cost to serve customer j from facility i, x_ij is flow quantity, d_j is customer demand, and M is a sufficiently large constant (big-M) that permits flow from facility i only when it is open.

Solvers like Gurobi, CPLEX, or open-source alternatives handle real-world network sizes (hundreds of potential facility locations, thousands of customer demand points) in minutes to hours depending on network complexity.
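For intuition about what the solver is doing, a toy uncapacitated instance can be solved by brute-force enumeration of the binary open/close decisions - the same objective and constraints as the MIP above, feasible only at toy scale. All costs and demands are illustrative:

```python
from itertools import combinations

# Toy facility location instance: enumerate every open/close combination y_i
# and pick the cheapest. All figures are illustrative.
fixed_cost = {"F1": 500, "F2": 400, "F3": 600}          # f_i
unit_cost = {                                            # c_ij per unit
    "F1": {"C1": 2, "C2": 6, "C3": 8},
    "F2": {"C1": 5, "C2": 3, "C3": 7},
    "F3": {"C1": 9, "C2": 4, "C3": 2},
}
demand = {"C1": 100, "C2": 80, "C3": 60}                 # d_j

def network_cost(open_set):
    # Uncapacitated case: each customer is served entirely from its
    # cheapest open facility, so single-sourcing is optimal.
    total = sum(fixed_cost[f] for f in open_set)
    for c, d in demand.items():
        total += d * min(unit_cost[f][c] for f in open_set)
    return total

best = min(
    (frozenset(s) for r in range(1, 4) for s in combinations(fixed_cost, r)),
    key=network_cost,
)
# Opening only F2 wins: its fixed-cost saving outweighs its transport penalty.
```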

Transportation lane optimization:

Within a defined network structure, lane optimization determines the optimal routing of product flows across the network. This includes carrier and mode selection by lane, consolidation opportunities, and cross-docking versus direct-delivery trade-offs.

Scenario analysis: Network optimization models are most valuable as scenario analysis tools. Rather than seeking the single optimal solution, sophisticated teams model multiple scenarios: current network, optimized current network, one additional DC, two additional DCs, different geographic coverage assumptions, different service level requirements. Each scenario produces a cost and service level profile, enabling leadership to make informed trade-offs between capital investment and operational performance.


Supplier Performance Analytics

Supplier performance analytics transforms supplier relationship management from a qualitative, relationship-driven process into a data-driven accountability system. It creates the factual foundation for supplier negotiations, sourcing decisions, and supply chain risk management.

The supplier scorecard:

A supplier scorecard aggregates performance across multiple dimensions into a composite score that enables comparison and trend tracking. Common dimensions:

Dimension | Metrics | Weight (example)
Delivery performance | On-time delivery rate, ASN accuracy | 30%
Quality | Defect rate, return rate, quality holds | 25%
Cost | Price competitiveness, invoice accuracy, charge-back rate | 20%
Responsiveness | Lead time compliance, exception response time | 15%
Sustainability | Carbon reporting compliance, audit scores | 10%

Weights should reflect the business’s strategic priorities and can be differentiated by supplier category (critical components versus commodity packaging).

Lead time reliability analysis:

Standard supplier scorecards report average lead time. Advanced analytics add lead time distribution analysis:

Lead Time Reliability Index = % of Orders Delivered Within ±10% of Committed Lead Time
Lead Time Variability = Standard Deviation of Lead Time / Average Lead Time (Coefficient of Variation)

A supplier with average lead time of 21 days and CV of 0.3 requires significantly more safety stock than one with the same average and CV of 0.1. Quantifying this in dollar terms - additional safety stock investment attributable to this supplier’s variability - creates a financial accountability mechanism beyond qualitative performance ratings.
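Both metrics in a short Python sketch (the order history and committed lead time are illustrative):

```python
from statistics import mean, pstdev

# Lead time reliability metrics from the formulas above.
# Orders and committed lead time are illustrative assumptions.

def reliability_index(actual_lts, committed_lt, tolerance=0.10):
    lo, hi = committed_lt * (1 - tolerance), committed_lt * (1 + tolerance)
    within = sum(1 for lt in actual_lts if lo <= lt <= hi)
    return within / len(actual_lts) * 100

def lead_time_cv(actual_lts):
    return pstdev(actual_lts) / mean(actual_lts)

lead_times = [20, 22, 21, 27, 19, 21, 30, 21, 23, 21]   # days; committed: 21
index = reliability_index(lead_times, committed_lt=21)  # ±10% window: 18.9-23.1
cv = lead_time_cv(lead_times)
# Two late outliers (27 and 30 days) drive both the 80% index and the CV.
```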

Defect cost attribution:

Track quality failure costs at the supplier level:

Total Supplier Quality Cost = (Incoming Inspection Cost) + (Rework and Scrap from Supplier Defects) + (Customer Return Cost Attributable to Supplier Defects) + (Line Stoppage Cost from Component Quality Holds)

This total, compared against purchase volume, produces a quality cost per dollar of spend - a metric that frequently reveals that the lowest-price supplier is not the lowest total cost supplier.


Supply Chain Resilience Analytics

Supply chain resilience analytics is an emerging analytical capability that most organizations have not yet formalized. It answers the question: how vulnerable is our supply chain to disruption, and what is the expected financial impact of specific disruption scenarios?

Few analytics platforms have built resilience measurement into a coherent methodology. The COVID-19 pandemic and subsequent supply chain disruptions created enormous demand for this capability, yet most organizations still respond to disruptions reactively rather than managing resilience proactively.

Resilience dimensions:

Supply concentration risk: Single-source dependency analysis.

Single Source Spend % = Spend with Single-Source Suppliers / Total Category Spend × 100
Geographic Concentration Index = HHI of Supplier Spend by Country = Σ s_i²

Where s_i is the percentage share of spend in country i. A fully concentrated supply base (all spend in one country) has an HHI of 10,000; a broadly diversified supply base approaches 0.
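A quick sketch of the index (spend figures are illustrative):

```python
# Geographic concentration HHI from the formula above, with spend shares
# expressed as percentages so the index runs from near 0 to 10,000.
# Spend figures are illustrative.

def hhi(spend_by_country):
    total = sum(spend_by_country.values())
    return sum((v / total * 100) ** 2 for v in spend_by_country.values())

concentrated = hhi({"CN": 9_000_000, "VN": 1_000_000})
diversified = hhi({"CN": 3_000_000, "VN": 3_000_000,
                   "MX": 2_000_000, "PL": 2_000_000})
# 90/10 split scores 8,200; an even four-country split scores 2,600.
```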

Supply chain depth mapping: Most organizations know their Tier 1 suppliers but have limited visibility to Tier 2 and Tier 3. A disruption at a Tier 2 supplier can cascade across multiple Tier 1 suppliers simultaneously. Resilience analytics maps supplier dependency networks to identify hidden concentrations - multiple Tier 1 suppliers who share a single Tier 2 sub-component source.

Disruption scenario modeling:

For each critical supply chain node (key supplier, key port, key distribution center), model the impact of a disruption event:

Disruption Impact Score = Probability of Disruption × Revenue at Risk × (1 - Mitigation Coverage)

Where:

  • Probability of Disruption incorporates historical disruption frequency, geographic risk scores, and supplier financial health signals
  • Revenue at Risk is the revenue dependent on the node’s continued operation during the disruption recovery period
  • Mitigation Coverage is the fraction of demand that can be redirected to alternative sources within the recovery window

Ranking nodes by Disruption Impact Score creates a prioritized investment agenda for resilience improvement.
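A minimal ranking sketch (probabilities, revenue at risk, and mitigation coverage are invented for illustration):

```python
# Disruption Impact Score ranking from the formula above.
# All node data is illustrative.

nodes = [
    {"node": "Supplier-A (single source)", "p": 0.15,
     "revenue_at_risk": 20_000_000, "mitigation": 0.10},
    {"node": "Port-X", "p": 0.05,
     "revenue_at_risk": 60_000_000, "mitigation": 0.50},
    {"node": "DC-East", "p": 0.10,
     "revenue_at_risk": 15_000_000, "mitigation": 0.80},
]

def impact_score(n):
    return n["p"] * n["revenue_at_risk"] * (1 - n["mitigation"])

ranked = sorted(nodes, key=impact_score, reverse=True)
# The single-sourced supplier outranks the higher-revenue port because so
# little of its demand can be redirected within the recovery window.
```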

Recovery time analysis:

Time to Recovery = Time to Identify Disruption + Time to Activate Alternative Supply + Time to Reach Normal Capacity

For each critical node, document the theoretical recovery time under different disruption scenarios. Nodes with long recovery times and high revenue dependency are the highest priority for buffer stock investment or pre-qualified alternative sourcing.

Resilience investment ROI:

Resilience investments (dual sourcing, strategic buffer stock, pre-qualified alternative routes) carry a cost. The ROI calculation compares that cost against the expected value of disruptions avoided:

Resilience Investment ROI = (Expected Annual Disruption Cost Without Investment - Expected Annual Disruption Cost With Investment) / Annual Investment Cost

This calculation, while imprecise, creates an economic framework for resilience decisions that goes beyond qualitative risk assessments.


Sustainable Supply Chain Analytics

Sustainable supply chain analytics measures the environmental and social impact of supply chain operations and embeds sustainability performance into the same analytical frameworks as cost and service level. This capability is rapidly transitioning from competitive differentiator to regulatory requirement in many jurisdictions.

Few analytics methodologies treat sustainability as a standalone discipline, which is a significant gap given the regulatory environment: the EU Corporate Sustainability Reporting Directive (CSRD), SEC climate disclosure rules, and customer carbon audit requirements are forcing this capability into mainstream supply chain operations.

Carbon footprint measurement:

Supply chain emissions are categorized under GHG Protocol Scope 3, Category 4 (upstream transportation) and Category 9 (downstream transportation):

Transportation Emissions = Activity Data × Emission Factor

Activity Data = Tonne-Kilometers (freight weight × distance)
Emission Factor = kg CO₂e per Tonne-Kilometer (varies by mode and fuel type)

Approximate emission factors:

  • Ocean freight: 0.010-0.016 kg CO₂e per tonne-km
  • Air freight: 0.550-0.850 kg CO₂e per tonne-km
  • Road freight (full truckload): 0.062-0.097 kg CO₂e per tonne-km
  • Rail freight: 0.022-0.028 kg CO₂e per tonne-km

Mode shift analytics - the savings from converting air freight to ocean or LTL to FTL - can be expressed simultaneously in carbon and cost terms, making the business case transparent.
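A sketch of that dual carbon-and-cost view for a single lane, using mid-range emission factors from the list above; the shipment weight, distances, and freight rates are illustrative assumptions:

```python
# Mode shift sketch: one lane under air vs. ocean. Emission factors are
# mid-range values from the list above; rates, weight, and distances are
# illustrative assumptions.

EF = {"air": 0.70, "ocean": 0.013}    # kg CO2e per tonne-km
RATE = {"air": 0.90, "ocean": 0.04}   # $ per tonne-km (assumed)

def lane_profile(mode, tonnes, km):
    tkm = tonnes * km
    return {"kg_co2e": tkm * EF[mode], "cost": tkm * RATE[mode]}

air = lane_profile("air", tonnes=5, km=9_000)
ocean = lane_profile("ocean", tonnes=5, km=11_000)   # longer routing by sea

co2_saved = air["kg_co2e"] - ocean["kg_co2e"]
cost_saved = air["cost"] - ocean["cost"]
# Even over a longer sea route, the shift saves both carbon and cost -
# the trade-off it buys is transit time, not money.
```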

Supplier sustainability scorecarding:

Incorporate sustainability dimensions into supplier scorecards:

  • Carbon intensity (kg CO₂e per unit supplied)
  • Renewable energy percentage at production facilities
  • Water consumption intensity
  • Labor practice audit scores (against standards like SA8000 or Sedex SMETA)
  • Supplier Tier 1 sustainability disclosure compliance rate

Circular supply chain analytics:

For organizations with take-back or refurbishment programs, track:

Recovery Rate = (Units Recovered and Reused or Recycled) / (Units Sold in Recovery-Eligible Channels) × 100
Landfill Diversion Rate = (Units Diverted from Landfill) / (Total End-of-Life Units) × 100
Remanufacturing Yield = (Units Returned to Saleable Condition) / (Total Units Received for Remanufacture) × 100

Carbon cost integration:

As carbon pricing mechanisms expand, integrating a shadow carbon price into supply chain cost models enables transportation and sourcing decisions that anticipate regulatory cost:

Total Landed Cost (Carbon-Adjusted) = Direct Procurement Cost + Logistics Cost + (Carbon Emissions × Shadow Carbon Price)

A shadow price of $50-100/tonne CO₂e (consistent with major carbon market pricing trajectories) materially changes mode selection decisions, particularly between air and ocean freight where the emission intensity difference is roughly 50:1.


Real-Time Visibility and Supply Chain Control Towers

A supply chain control tower is an integrated visibility platform that aggregates real-time data across the supply chain network - from supplier production status through in-transit shipments to warehouse inventory positions and customer order status - and enables proactive exception management.

What distinguishes a control tower from a dashboard:

A dashboard shows you what happened. A control tower shows you what is happening and what is about to happen. The control tower adds three capabilities that dashboards lack: real-time event ingestion (not batch reporting), exception alerting with business rule-based triggers, and workflow integration that routes exceptions to the right person with context for action.

Control tower architecture layers:

Data ingestion layer: Real-time streams from TMS tracking, carrier APIs, WMS, ERP MRP exceptions, and supplier portal updates. Latency target: 5-15 minutes for operational visibility data.

Event processing layer: Pattern detection and exception generation. Business rules that translate raw data into actionable signals: “Shipment for PO 45892 departed supplier 4 days late and will not arrive before the scheduled production start date” or “Inventory for SKU A1102 at DC-East will deplete below safety stock in 6 days at current demand rate with no inbound replenishment confirmed.”

Decision support layer: For each exception, the control tower surfaces recommended actions with predicted outcomes. “Expedite via air freight at estimated additional cost of $8,400 to avoid $45,000 production stoppage cost” versus “Accept 4-day production delay and implement customer communication protocol.”

Workflow and resolution layer: Exception assignment to owner, resolution tracking, and escalation logic when exceptions are not resolved within threshold time windows.

Key performance indicators for control tower effectiveness:

Exception Resolution Rate = Exceptions Resolved Within SLA / Total Exceptions Generated × 100
Mean Time to Resolution (MTTR) = Average Time from Exception Trigger to Resolution Confirmation
Disruption Prevention Rate = Disruptions Prevented Through Proactive Intervention / Total Disruption Events

AI-Driven Supply Chain Planning

AI is transforming supply chain planning in ways that represent genuine capability step-changes - not incremental improvements on existing methods, but qualitatively different approaches to planning problems that were previously intractable.

Autonomous replenishment:

Machine learning models that continuously observe demand signals (point-of-sale data, web traffic, social signals), supply signals (inventory positions, in-transit inventory, confirmed purchase orders), and external signals (weather, events, economic indicators) and generate replenishment recommendations or execute replenishment orders without human review for routine situations.

The key design question is human-in-the-loop versus fully automated. Most organizations begin with AI-generated recommendations that humans approve, then progressively automate the tail of small, routine replenishment decisions while keeping humans on high-value, high-risk replenishment decisions.

Dynamic pricing and supply allocation:

In constrained supply situations - allocations, product launches, capacity-limited periods - AI-driven allocation models optimize across competing customer demands to maximize total contribution margin or prioritize strategic customers:

Optimal Allocation = argmax Σ(j) (Price_j - Cost_j) × x_j
Subject to: Σ(j) x_j ≤ Available Supply
            x_j ≤ Customer Demand_j
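Because the objective is linear with a single supply constraint, allocating in descending margin order solves this formulation exactly. A Python sketch with illustrative customer data:

```python
# Margin-ranked allocation: optimal for the single-constraint linear
# formulation above. Customer data is illustrative.

def allocate(customers, available_supply):
    allocation = {}
    remaining = available_supply
    # Serve customers in descending contribution-margin order.
    for c in sorted(customers, key=lambda c: c["price"] - c["cost"], reverse=True):
        qty = min(c["demand"], remaining)
        allocation[c["name"]] = qty
        remaining -= qty
    return allocation

customers = [
    {"name": "Retail-East", "price": 50, "cost": 30, "demand": 400},
    {"name": "Wholesale",   "price": 42, "cost": 30, "demand": 600},
    {"name": "Retail-West", "price": 48, "cost": 30, "demand": 300},
]

plan = allocate(customers, available_supply=800)
# The two retail channels are filled; Wholesale absorbs the shortfall.
```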

Generative AI for supply chain planning:

Large language model integration in supply chain planning platforms is moving from experimental to production in leading organizations. Current practical applications include: natural language querying of supply chain data (“What is my OTIF performance for Carrier X on the Dallas-Chicago lane over the past 90 days?”), automated exception narrative generation for supply chain review meetings, and supply chain disruption news monitoring with automated impact assessment.

The important caution: AI planning tools require high-quality, integrated data to produce reliable outputs. An AI replenishment engine operating on inaccurate inventory data or a biased demand forecast will make poor decisions at scale. The investment in data quality and integration described in Data Sources is the prerequisite for AI capability, not an alternative to it.

Digital twin modeling:

A supply chain digital twin is a computational model that represents the entire supply chain network with sufficient fidelity to simulate the impact of decisions before they are made. Digital twins enable scenario testing: “If I move 20% of our Eastern DC inventory to the Midwest DC, what happens to fill rates and freight costs across customer segments?” without touching live operations.

Leading implementations ingest real-time data to keep the twin synchronized with actual network state, enabling not just planning simulation but real-time “what-if” analysis during disruption response. The Gartner Supply Chain Top 25 consistently includes organizations with mature digital twin capabilities - Amazon, Walmart, Unilever, P&G - who use them to make network decisions that would take competitors months to analyze.

The progression from descriptive dashboards to prescriptive AI-driven planning is achievable for most organizations within 3-5 years with the right data infrastructure and analytical talent. Each technique described in this article builds on the previous: demand forecasting accuracy unlocks inventory optimization; inventory optimization enables network optimization; real-time visibility enables control tower management; and clean, integrated data across all of these enables AI-driven autonomous planning.
