10 min read · 1 February 2024

Overfitting Is Not a Model Problem, It's a Thinking Problem

The bias-variance tradeoff reframed as a failure of reasoning, not tuning. Why overfitting in quantitative finance is uniquely dangerous, and how to detect and prevent it systematically.

Every ML practitioner knows what overfitting is. Fewer understand why it keeps happening to experienced practitioners and not just beginners.

The technical definition is clear: a model that fits training data well but fails to generalize to new data. The cause is equally clear: too much model complexity relative to the amount of data. The fix seems obvious: regularization, cross-validation, early stopping.

So why does overfitting remain so pervasive in practice? Because it’s not really a tuning problem. It’s a thinking problem.

The Bias-Variance Decomposition

Expected prediction error decomposes into three terms:

$$\text{Error} = \text{Bias}^2 + \text{Variance} + \text{Irreducible Noise}$$

Bias is systematic error. A model with high bias makes the same kinds of mistakes regardless of which training data it sees. It can’t capture the true pattern. Underfit.

Variance is sensitivity to the training data. A model with high variance gives very different predictions when trained on different samples drawn from the same distribution. Overfit.

Irreducible noise is the variance in the labels themselves — unpredictability in the real world that no model can capture regardless of complexity.

The tradeoff: as you increase model complexity, bias decreases and variance increases. Somewhere in between is the optimal complexity. The task of model selection is finding where that is.
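The decomposition is easy to see empirically: fit models of increasing complexity on many resampled training sets and measure the bias² and variance of their predictions at fixed test points. A minimal numpy sketch, assuming a noisy sine as the true function (all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_train=30, n_trials=200, noise=0.3):
    """Fit a polynomial of given degree on many resampled training sets,
    then decompose prediction error at fixed test points into bias^2 and variance."""
    x_test = np.linspace(0, 1, 50)
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        x = rng.uniform(0, 1, n_train)
        y = true_fn(x) + rng.normal(0, noise, n_train)
        coefs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coefs, x_test)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_fn(x_test)) ** 2)   # systematic error
    variance = np.mean(preds.var(axis=0))                   # sensitivity to the sample
    return bias_sq, variance

for degree in (1, 3, 9):
    b2, var = bias_variance(degree)
    print(f"degree={degree}  bias^2={b2:.3f}  variance={var:.3f}")
```

Low degrees show high bias and low variance; high degrees show the reverse.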

But here’s the part that turns this from a tuning problem into a thinking problem: you can overfit on any part of the modeling pipeline, not just the model itself.

Where Overfitting Actually Happens

Feature Engineering

Every feature you engineer and include creates an opportunity to overfit. When you do exploratory analysis on the full dataset and then engineer features based on what you see — correlations, patterns, anomalies — you’ve implicitly fit those features to the test set.

This is called data leakage from exploration. The features look predictive in your analysis. They won’t generalize.

Fix: freeze a holdout set before you start exploring. Never look at it until final evaluation.
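One way to enforce the discipline, sketched with pandas and scikit-learn (file names are illustrative):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")  # full raw dataset (path is illustrative)

# Freeze the holdout before any exploration or feature engineering happens.
explore_df, holdout_df = train_test_split(df, test_size=0.2, random_state=42)
holdout_df.to_csv("holdout.csv", index=False)  # write it out and do not open it again

# All EDA, feature engineering, tuning, and model selection uses explore_df only.
# holdout.csv is read exactly once, for the final evaluation.
```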

Hyperparameter Tuning

Cross-validation reduces overfitting on training data. But if you tune hyperparameters with cross-validation and report CV performance as your estimate, you’ve overfit to the validation sets. You’ve selected the hyperparameters that happened to work best on those particular validation folds.

Fix: nested cross-validation, or a truly held-out test set that is touched only once.
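A minimal nested cross-validation sketch with scikit-learn: the inner loop tunes hyperparameters, the outer loop scores models on folds the tuning never saw (the estimator and grid are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Inner loop: hyperparameter search (here, regularization strength C).
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=inner_cv,
)

# Outer loop: each outer fold scores a model whose hyperparameters
# were chosen without ever seeing that fold.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```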

Model Selection

If you try ten models and pick the one with the best test-set accuracy, your “test” accuracy is no longer an unbiased estimate of generalization. You’ve used the test set as a validation set.

Fix: have a separate, untouched holdout that you evaluate exactly once, after all model selection and tuning is complete.

Metrics Selection

If you evaluate many metrics and report the one that looks best, you’ve overfit to metrics. If you tune your model to maximize AUC and then note that it also has good F1 and accuracy, those numbers are optimistically biased.

Fix: specify evaluation metrics before training. Report all prespecified metrics regardless of how they look.
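In code, prespecification can be as simple as fixing the metric set in one place before any training run and reporting every entry, good or bad. A minimal sketch (the metric choice is illustrative):

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(y_true, y_pred, y_score):
    """Prespecified metric set: fixed before training, reported in full afterwards."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }
```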

The pattern across all these: any decision made by looking at outcomes creates overfitting, even if the decision seems principled. The model doesn’t have to memorize training data directly. You can do the overfitting yourself through your choices.

Regularization: What It’s Actually Doing

Regularization adds a penalty for model complexity to the training objective:

$$L_{\text{regularized}} = L_{\text{training}} + \lambda \cdot \text{complexity penalty}$$

L2 (Ridge): penalty is the sum of squared weights. Pulls weights toward zero, but rarely to exactly zero. Effective for correlated features.

L1 (Lasso): penalty is the sum of absolute weights. Pulls weights all the way to zero for unimportant features. Performs implicit feature selection.

Elastic Net: combines both. Good default when you have correlated features and want sparsity.

Dropout (neural networks): randomly zero out neurons during training. Forces the network to learn redundant representations — no single neuron can be relied upon.

Early stopping: stop training when validation loss starts increasing. Equivalent to a form of regularization that limits how far gradient descent travels from initialization.

What regularization does conceptually: it constrains the model to be simpler than the data would naturally allow. The constraint is encoded through the penalty term and the hyperparameter $\lambda$. Higher $\lambda$ = more constraint = less overfitting = more underfitting risk.
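As a concrete illustration with scikit-learn, where `alpha` plays the role of $\lambda$ (the data and penalty values are arbitrary), Lasso and Elastic Net zero out coefficients while Ridge only shrinks them:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

for name, model in [
    ("Ridge (L2)", Ridge(alpha=1.0)),                      # shrinks weights toward zero
    ("Lasso (L1)", Lasso(alpha=1.0)),                      # drives unimportant weights to exactly zero
    ("ElasticNet", ElasticNet(alpha=1.0, l1_ratio=0.5)),   # mix of both penalties
]:
    model.fit(X, y)
    n_zero = np.sum(np.abs(model.coef_) < 1e-8)
    print(f"{name:12s} zeroed coefficients: {n_zero}/{X.shape[1]}")
```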

Regularization is a partial fix. It addresses model complexity; it doesn’t address the other forms of overfitting above.

Cross-Validation Done Right

k-fold cross-validation trains on k-1 folds and validates on the held-out fold, rotating until every example has been used as validation once. The reported metric is the average across folds.

What it does: reduces variance in the evaluation estimate. A single train-test split might be unlucky (easy test set or hard test set). Averaging across many splits is more stable.

What it doesn’t do: guarantee generalization to the true deployment distribution. If your deployment data comes from a different time period, geography, or user population than your training data, cross-validation over the full dataset won’t catch the distribution shift.

For time series data: never use random k-fold. Use forward-chaining (walk-forward validation) — train on past data, validate on future data. The order matters.
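scikit-learn's `TimeSeriesSplit` implements exactly this forward-chaining scheme: every validation fold lies strictly after the data it was trained on. A minimal sketch:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Rows are assumed to be in chronological order.
X = np.arange(100).reshape(-1, 1)
y = np.random.default_rng(0).normal(size=100)

for train_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    # Training always precedes validation in time; no shuffling, no leakage from the future.
    print(f"train: rows {train_idx[0]}-{train_idx[-1]}  "
          f"validate: rows {val_idx[0]}-{val_idx[-1]}")
```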

Overfitting in Quantitative Finance: Uniquely Dangerous

In most ML applications, overfitting means your model underperforms in production. You discover this gradually, recalibrate, and move on. In quantitative trading, overfitting can mean:

  1. Trading a strategy that looks profitable but isn’t. You allocate capital, pay transaction costs, and lose money.
  2. Fooling yourself that you have alpha. The pattern you found was noise in the historical data. It will not repeat.
  3. Systematic capital destruction before you realize the strategy is broken.

The quant world has specific vocabulary for this: p-hacking, backtest overfitting, and strategy mining.

The problem is severe for two reasons:

The Multiple Testing Problem

If you test 100 independent strategies with p < 0.05 significance threshold, you expect 5 to appear significant by chance. If you report only the best one, you’re reporting a false discovery.

The Bonferroni correction: divide your significance threshold by the number of tests. If testing 100 strategies, require p < 0.0005 for any individual one to be credible.

In practice, the number of “tests” you’ve implicitly run includes all the variations you considered but didn’t formally test. That number is often much larger than people account for.
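A quick simulation makes the scale of the problem concrete: run a significance test on strategies that are pure noise and a handful clear p < 0.05 anyway, while the Bonferroni-corrected threshold rejects essentially all of them. A sketch with numpy and scipy (the return distribution is illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_strategies, n_days = 100, 1000

# 100 "strategies" that are pure noise: daily returns with zero true mean.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))
pvals = np.array([stats.ttest_1samp(r, 0.0).pvalue for r in returns])

print("significant at p < 0.05:      ", np.sum(pvals < 0.05))                  # ~5 expected by chance
print("significant after Bonferroni: ", np.sum(pvals < 0.05 / n_strategies))   # ~0
```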

Minimum Backtest Length

A practical rule of thumb: the minimum number of independent years of backtest data needed to detect a strategy with annualized Sharpe ratio $S$ at significance level $\alpha$ and power $1 - \beta$ is approximately:

$$T \geq \frac{(z_\alpha + z_\beta)^2}{S^2}$$

With a two-sided 95% test and power ignored ($z_\beta \approx 0$), a Sharpe of 1.0 needs roughly 4 years ($1.96^2 / 1.0^2 \approx 3.8$) and a Sharpe of 0.5 needs roughly 16 years; requiring 80% power roughly doubles both numbers. Most quants don't have 16 years of data for the strategies they're testing.

This means most backtest results are underpowered by construction. The conclusion: be extremely skeptical of backtests, especially for low-Sharpe strategies.
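The formula is quick to evaluate directly. A minimal sketch with scipy, where `power=0.0` reproduces the back-of-the-envelope numbers above and a positive power adds the $z_\beta$ term (function name and defaults are illustrative):

```python
from scipy.stats import norm

def min_backtest_years(sharpe, alpha=0.05, power=0.0):
    """Rough minimum backtest length in years, from T >= (z_alpha + z_beta)^2 / S^2."""
    z_alpha = norm.ppf(1 - alpha / 2)                  # two-sided significance
    z_beta = norm.ppf(power) if power > 0 else 0.0     # 0 reproduces the rough numbers above
    return (z_alpha + z_beta) ** 2 / sharpe ** 2

print(min_backtest_years(1.0))             # ~3.8 years
print(min_backtest_years(0.5))             # ~15.4 years
print(min_backtest_years(0.5, power=0.8))  # ~31 years with 80% power
```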

Detection: Signs You’ve Overfit

  1. Training error much lower than validation error. The gap tells you how overfit you are (a quick check for this and for sign 4 is sketched after this list).
  2. Performance degrades significantly in out-of-sample testing. Not a little — a lot. A well-generalized model should degrade gracefully.
  3. Hyperparameter sensitivity. A small change in regularization strength causes large changes in model behavior. Overfit models are fragile.
  4. Performance varies wildly across cross-validation folds. High variance in fold-level results is a sign the model is fitting noise.
  5. Feature importances are concentrated on noisy features. If your top features are ones with many missing values, high cardinality, or suspicious correlation with the target, suspect leakage or overfitting.
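Signs 1 and 4 are easy to check mechanically from cross-validation output. A minimal sketch assuming a scikit-learn estimator; the interpretation comments, not any particular threshold, are the point:

```python
from sklearn.model_selection import cross_validate

def overfit_report(model, X, y, cv=5):
    """Report the train-validation gap and the fold-to-fold variance."""
    res = cross_validate(model, X, y, cv=cv, return_train_score=True)
    train, val = res["train_score"], res["test_score"]
    print(f"train score:      {train.mean():.3f}")
    print(f"validation score: {val.mean():.3f}")
    print(f"train-val gap:    {train.mean() - val.mean():.3f}  (large gap -> overfit)")
    print(f"fold std dev:     {val.std():.3f}  (high variance across folds -> fitting noise)")
```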

The Mental Model

Think of the training-test gap not as a technical problem but as evidence that your model’s complexity is wrong for your data. The question is not “how do I reduce overfitting?” but “what is the right level of complexity for this data?”

That question is answered by discipline, not intuition: frozen holdouts, nested validation, prespecified metrics, and validation schemes that match the deployment distribution.

Overfitting is fundamentally a problem of using knowledge you shouldn’t have — future data, test labels, or patterns that only exist in your specific historical sample. Treat it as a discipline problem, not a tuning problem, and you’ll catch it in more places.

overfitting bias-variance regularization cross-validation quantitative-finance generalization
