Industry · 11 min read · 20 January 2024

Data Science in Supply Chain: What the Models Actually Do

A practitioner's overview of how ML is applied in supply chain — from demand forecasting and inventory optimization to markdown pricing and fulfillment capacity, with what the models can and can't solve.

After a year building machine learning systems at Blue Yonder — a company that processes supply-chain data for some of the world’s largest retailers — I have a clear picture of what data science actually does in supply chain, where it works well, and where it disappoints.

The short version: supply chain ML is mostly time-series forecasting at scale, with a heavy operations research component, deployed in environments where being 3% more accurate can translate to millions of dollars in inventory efficiency. It’s not glamorous. It’s also genuinely hard.

The Core Problems

Supply chain data science clusters around a set of recurring problem types. Each has its own data characteristics, objectives, and failure modes.

Demand Forecasting

The foundation of almost everything in supply chain. If you can predict demand accurately at the right granularity (SKU × store × day), every downstream decision improves.

The challenge: demand is non-stationary, hierarchical, and affected by interventions (promotions, pricing changes, new product launches, competitor actions). A model trained on stable historical patterns fails when patterns change.

What actually works: gradient-boosted trees (XGBoost, LightGBM) over engineered features (lags, rolling statistics, calendar and promotion flags), and probabilistic forecasts that produce a full demand distribution rather than a single number, so downstream decisions can account for uncertainty.

What doesn't work: models that extrapolate a stable history through a regime change, point forecasts fed into decisions that actually depend on the tails, and pure ML for products with no sales history.
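
Whatever the model class, forecasts at this granularity are usually benchmarked against simple seasonal baselines before anything fancier is justified. A minimal sketch (illustrative, not Blue Yonder's production code):

```python
from statistics import median

def seasonal_naive(history, season=7, horizon=7, lookback=4):
    """Forecast each future day as the median of the same weekday
    over the last few weeks -- the baseline any ML model must beat
    at SKU x store x day granularity."""
    forecasts = []
    for h in range(horizon):
        # values at the same weekday slot in previous seasons
        slots = [history[len(history) + h - k * season]
                 for k in range(1, lookback + 1)
                 if len(history) + h - k * season >= 0]
        forecasts.append(median(slots))
    return forecasts
```

The median (rather than mean) makes the baseline robust to one-off promotion spikes in the lookback window.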

Inventory Optimization

Demand forecasting tells you what you expect to sell. Inventory optimization tells you what to stock given that forecast and its uncertainty.

The classical framework: set a safety stock level that balances the cost of a stockout (lost sale, customer dissatisfaction) against the cost of excess inventory (holding costs, obsolescence). Safety stock is a function of demand forecast uncertainty and supplier lead time uncertainty.

Where ML adds value: better demand uncertainty estimates (not just a point forecast but a distribution), dynamic safety stock levels that adapt to current conditions, and multi-echelon optimization (coordinating inventory decisions across distribution centers and stores).
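
The classical safety stock formula combines both uncertainty sources. A sketch using the standard textbook expression (the numbers in the example are illustrative):

```python
import math
from statistics import NormalDist

def safety_stock(mean_demand, sd_demand, mean_lead_time, sd_lead_time,
                 service_level=0.95):
    """Classical safety stock: buffer against demand uncertainty over
    the lead time AND lead-time uncertainty itself, sized to hit a
    target cycle service level."""
    z = NormalDist().inv_cdf(service_level)  # e.g. ~1.64 at 95%
    # combined std dev of demand over a stochastic lead time
    sigma = math.sqrt(mean_lead_time * sd_demand ** 2
                      + (mean_demand * sd_lead_time) ** 2)
    return z * sigma

# Example: daily demand 100 +/- 30, lead time 5 +/- 1 days
ss = safety_stock(100, 30, 5, 1)  # roughly 198 units of buffer
```

Note the ML leverage point: a better `sd_demand` estimate (a real predictive distribution instead of a historical variance) shrinks this buffer directly.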

Replenishment Forecasting

How much to order, when, from which supplier. Related to inventory optimization but focused on the ordering decision rather than the stocking level.

The hard parts: supplier lead times and their uncertainty (an order placed today covers demand several days out), coordinating orders across echelons so a store order reflects what the distribution center can actually ship, and making the ordering decision consume the full forecast distribution rather than just its mean.
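
A common framing is an order-up-to policy. This is a simplified sketch (the minimum-order-quantity handling is an illustrative constraint, not something from the article):

```python
def replenishment_order(on_hand, on_order, lead_time_demand,
                        safety_stock, moq=0):
    """Order-up-to policy: raise the inventory position (stock on hand
    plus stock already ordered) to expected demand over the lead time
    plus the safety stock buffer."""
    target = lead_time_demand + safety_stock
    position = on_hand + on_order
    qty = max(0.0, target - position)
    # round up to the supplier's minimum order quantity, if any
    if moq and 0 < qty < moq:
        qty = moq
    return qty
```

Counting `on_order` in the position is what separates this from naive gap-filling: without it, every review cycle re-orders demand that is already in transit.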

Markdown Optimization

When products aren’t selling at full price, when and how deeply do you discount?

This is a price response modeling problem: given historical sales at different price points, build a demand-at-price model and use it to find the markdown schedule that maximizes revenue recovery (or profit, depending on the objective).

The complications: end-of-season markdown decisions are under time pressure. You can’t run a long A/B test when the product goes out of season in 4 weeks. And demand at a discounted price is affected by competitors’ discounting, not just your own pricing.
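
To make the mechanics concrete, here is a toy version with a constant-elasticity demand-at-price model and a grid search over a single markdown depth. The functional form and all parameters are illustrative assumptions, not the article's actual model:

```python
def best_markdown(base_price, base_weekly_demand, elasticity,
                  inventory, weeks_left,
                  discounts=(0.0, 0.1, 0.2, 0.3, 0.4)):
    """Grid-search a markdown depth under a constant-elasticity
    demand-at-price model, maximizing revenue recovered before the
    season ends (unsold units are written off)."""
    best = (0.0, 0.0)  # (discount, revenue)
    for d in discounts:
        price = base_price * (1 - d)
        # deeper discount -> higher weekly demand
        weekly = base_weekly_demand * (1 - d) ** (-elasticity)
        units = min(weekly * weeks_left, inventory)  # can't sell more than stock
        revenue = price * units
        if revenue > best[1]:
            best = (d, revenue)
    return best
```

The inventory cap is what makes this interesting: past the point where everything sells, deeper discounts only give margin away.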

Fulfillment Capacity Planning

For retailers with fulfillment centers processing both store replenishment and direct-to-consumer orders: predicting how much capacity will be needed each day or week.

This is a multi-source demand aggregation and forecasting problem. The inputs are retailer orders (partially predictable from their ordering patterns) plus consumer orders (more variable). Getting this right prevents both costly overtime and understaffing.

The Tech Stack in Practice

At Blue Yonder, the production stack:

Data pipeline: Apache Beam on Google Cloud Dataflow for bulk ingestion, cleaning, and feature engineering. The data volumes (5TB+ for active clients) make local processing impractical. Beam’s dataflow model (transforms → pipeline → parallel execution) worked well for the batch-heavy workloads.

Feature engineering: A feature store to prevent training/serving skew. The same feature computation code runs at training time and serving time — a discipline that took significant engineering effort to establish.
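
The core discipline is easier to show than to describe: one feature function, called verbatim by both paths. A minimal sketch (the feature names are made up for illustration):

```python
def demand_features(sales_history, promo_flag, day_of_week):
    """A single feature implementation shared by the training pipeline
    and the serving path. Because there is exactly one copy of this
    logic, train-time and serve-time features cannot drift apart."""
    recent = sales_history[-28:]
    return {
        "lag_7": sales_history[-7] if len(sales_history) >= 7 else 0.0,
        "mean_28": sum(recent) / len(recent) if recent else 0.0,
        "promo": float(promo_flag),
        "dow": day_of_week,
    }

# Training pipeline: features computed over a historical row.
train_row = demand_features([5, 7, 6, 8, 9, 4, 6, 7], True, 2)
# Serving path: the *same* function on the same inputs -- no skew.
serve_row = demand_features([5, 7, 6, 8, 9, 4, 6, 7], True, 2)
assert train_row == serve_row
```

The failure mode this prevents is subtle: a reimplemented serving feature that rounds, windows, or handles nulls slightly differently degrades the model silently.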

Models: A mix. XGBoost and LightGBM for tabular forecasting with engineered features. TensorFlow and Keras for deep learning models (LSTM-based for longer horizon forecasts). TFX (TensorFlow Extended) for the ML pipeline orchestration.

Serving: BigQuery for batch inference (compute forecasts overnight, store results). Kubernetes endpoints for real-time serving where latency requirements demanded it.

Monitoring: Data drift detection, forecast accuracy tracking against actuals, business metric dashboards.
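
A typical drift screen is the Population Stability Index over each input feature. A self-contained sketch (the binning and smoothing choices are one common convention, not the article's exact setup):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    recent one. Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching,
    > 0.25 investigate before trusting the model."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index for x
        # smooth empty bins to avoid log(0)
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

Pairing this input-side check with accuracy-against-actuals matters because actuals arrive with a lag; drift in the inputs is often the earliest warning.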

What Data Science Actually Changes

At the SKU × store × day level, reducing demand forecast error from a MAPE of 30% to 20% sounds like a modeling achievement. What it actually unlocks:

For a retailer doing $1B/year in a category, a 2% inventory reduction from better forecasting is $20M in freed-up capital. The math makes it obvious why this is worth investing in.
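
The mechanism behind that number: safety stock scales roughly linearly with forecast error, so cutting error from 30% to 20% cuts the buffer by about a third. A back-of-envelope sketch (the 6% buffer-to-revenue ratio is an illustrative assumption chosen to reproduce the figure above):

```python
def freed_capital(category_revenue, buffer_ratio, error_before, error_after):
    """Back-of-envelope: safety stock scales ~linearly with forecast
    error, so a relative error reduction translates directly into
    inventory (and capital) reduction. All inputs illustrative."""
    buffer_value = category_revenue * buffer_ratio
    reduction = 1 - error_after / error_before
    return buffer_value * reduction

# $1B category, 6% of revenue tied up in safety stock,
# error 30% -> 20% frees about a third of that buffer: ~$20M
freed = freed_capital(1_000_000_000, 0.06, 0.30, 0.20)
```
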

What Data Science Can’t Fix

Data quality problems. Garbage in, garbage out. If your historical sales data has missing records, duplicate entries, or systematic biases (e.g., demand was constrained by stockouts, so recorded sales understate true demand), your forecasts will be wrong in ways that are hard to detect.

Organizational resistance to model outputs. Buyers and planners often override model recommendations based on intuition. This isn’t always wrong — they have information the model doesn’t have — but systematic overriding without feedback to the model is wasted signal.

The new product problem. Cold start for new products (no sales history) requires either analogical forecasting (similar products’ launch patterns) or expert input. Pure ML models are useless without data.
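
Analogical forecasting can be sketched simply: normalize the launch curves of comparable products into a shape, then scale by a human- or model-supplied volume estimate. All names and numbers here are illustrative:

```python
def analog_launch_forecast(analog_curves, expected_total, weeks=8):
    """Cold-start forecast for a new product: average the normalized
    launch curves of comparable products, then scale by an estimate
    of total launch-period volume (from a buyer or a separate model)."""
    shape = [0.0] * weeks
    for curve in analog_curves:
        total = sum(curve[:weeks])
        for w in range(weeks):
            # each analog contributes its normalized weekly share
            shape[w] += (curve[w] / total) / len(analog_curves)
    return [expected_total * s for s in shape]
```

The split of labor is the point: ML supplies the shape (decay pattern from analogs), the expert supplies the level. Neither alone is enough with zero history.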

Genuinely unprecedented events. COVID-19 demand patterns broke every model that wasn't retrained on current data. A model trained on historical patterns is an implicit bet that those patterns will hold. When they don't, you need human judgment.

The Competitive Reality

The large supply chain software vendors (Blue Yonder, o9, Kinaxis, e2open) compete on ML quality among other things. But for any individual retailer, the first-order question is not “do we have the best model?” — it’s “are we using data at all?” Many supply chain decisions are still made on spreadsheets and intuition. Moving from there to basic data-driven forecasting is the biggest jump. The difference between a good model and a great model is much smaller.

supply-chain demand-forecasting inventory industry

Let's collaborate!

Whether you need a quantitative researcher, a machine learning systems builder, or a technical advisor — I'm available for select consulting engagements.

Get in Touch →