Don’t Add Complexity. Add Information.
Better Features Beat Bigger Models
Every quant researcher reaches a point where nothing works. No alpha emerges, no model improves, no backtest makes sense. The usual reaction is to add more complexity.
A deeper model, a new architecture, more tuning. Yet this rarely fixes anything. When a model fails, it is almost never because the model is too simple. It is because the information it receives is too weak to reveal anything useful.
The solution is not to increase complexity. It is to increase information.
1. Complexity Is Not the Cure
When research goes nowhere, adding complexity feels like the logical next step. A more advanced model, a larger network, an extra layer of regularization. In practice, this only makes the problem harder to diagnose. A complex model does not create information. It can only amplify whatever structure is already present in the data.
If that structure is weak or nonexistent, the model will either fit noise or collapse entirely. You end up with an unstable backtest, a signal that disappears out of sample, or a model that learns patterns specific to a single regime. Increasing complexity raises the risk of overfitting without improving the quality of what the model learns.
When nothing works, the limitation is almost always the information content of the inputs, not the sophistication of the model.
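To see why, here is a minimal sketch on synthetic data (everything in it is made up for illustration): both models are trained on features that have no relationship to the target, and the more flexible one still produces an impressive in-sample fit that evaporates out of sample. The extra capacity amplifies noise; it does not create information.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 20))   # 20 "features" that are pure noise
y = rng.normal(size=2_000)         # target unrelated to any feature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, shuffle=False)

for model in (Ridge(alpha=1.0),
              GradientBoostingRegressor(max_depth=5, n_estimators=500, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__,
          "in-sample R2:", round(r2_score(y_tr, model.predict(X_tr)), 3),
          "out-of-sample R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```

The simple ridge regression stays honest in both samples; the boosted trees typically fit the training set well and still fail out of sample, because there was never any signal to find.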
2. What Actually Fixes the Problem: More Information
If a model cannot learn anything useful, the issue is not the architecture. It is the information available to it. The fastest way to unblock research is to increase the amount of meaningful structure in the data.
There are three ways to do that.
First, expand the raw data. Add more assets, more horizons, or more context so the model sees a wider range of behaviors.
Second, refine the targets. A clearer target creates a clearer learning objective and reduces the noise the model has to fight, as sketched just below this list.
Third, and most important, enrich the features. Good features reveal patterns that are invisible in the raw time series.
Information is the real bottleneck. Once you improve it, even simple models start producing signals that make sense.
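As an illustration of the second point, here is a minimal sketch of one common way to refine a target, assuming a pandas series of closing prices (the column name and lookback windows are placeholders): scale the forward return by trailing volatility so the learning objective is comparable across calm and turbulent periods.

```python
import numpy as np
import pandas as pd

def volatility_scaled_target(close: pd.Series, horizon: int = 5, vol_window: int = 20) -> pd.Series:
    """Forward log return over `horizon` bars, scaled by trailing volatility."""
    log_price = np.log(close)
    forward_return = log_price.shift(-horizon) - log_price      # what happens next
    trailing_vol = log_price.diff().rolling(vol_window).std()   # how noisy "now" is
    return forward_return / (trailing_vol * np.sqrt(horizon))

# usage (assuming a DataFrame `df` with a "close" column):
# df["target"] = volatility_scaled_target(df["close"])
```

Volatility scaling is only one of several common refinements; horizon-matched returns or triple-barrier labels serve the same purpose of giving the model a cleaner objective.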
You can also accelerate this step by using the Quantreo library, which provides a large set of ready-to-use features to enrich your data instantly.
3. Feature Engineering Is the Real Breakthrough
Features are the fastest way to increase the amount of usable information in your dataset. Raw prices contain very little structure. Features extract and highlight the behaviors that matter: volatility patterns, microstructure signals, relative strength, regime shifts, liquidity imbalances. These transformations turn noise into something the model can actually learn from.
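As a rough sketch, and assuming an OHLCV DataFrame with `close` and `volume` columns (the lookback windows are arbitrary), a handful of such features might look like this:

```python
import numpy as np
import pandas as pd

def basic_features(df: pd.DataFrame) -> pd.DataFrame:
    """A few simple transformations of raw OHLCV data into more informative inputs."""
    out = pd.DataFrame(index=df.index)
    log_ret = np.log(df["close"]).diff()

    # Volatility pattern: 20-bar realized volatility, annualized for daily bars
    out["realized_vol_20"] = log_ret.rolling(20).std() * np.sqrt(252)

    # Relative strength: distance of the close from its 50-bar moving average
    out["rel_strength_50"] = df["close"] / df["close"].rolling(50).mean() - 1

    # Crude liquidity/flow imbalance: signed volume over total volume
    signed_volume = df["volume"] * np.sign(log_ret)
    out["volume_imbalance_20"] = signed_volume.rolling(20).sum() / df["volume"].rolling(20).sum()

    return out
```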
Most breakthroughs in research come from better features, not better models. A single well-designed feature can reveal more signal than an entire stack of complex architectures. It creates clarity in the data, stabilizes the learning process, and reduces sensitivity to market noise.
When nothing works, generate more features, test them carefully, and let the data show you where the structure exists. This is usually the point where research starts moving again.
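One simple way to test candidate features carefully is to rank them by their correlation with forward returns before any model is fit. The helper below is illustrative, not a library function, and assumes a feature DataFrame and a forward-return series on the same index:

```python
import pandas as pd

def rank_ic(features: pd.DataFrame, forward_return: pd.Series) -> pd.Series:
    """Spearman rank correlation of each candidate feature with the forward return."""
    aligned = features.join(forward_return.rename("fwd"), how="inner").dropna()
    return aligned.drop(columns="fwd").corrwith(aligned["fwd"], method="spearman").sort_values()

# usage (reusing the earlier sketches, with a 5-bar forward return as the outcome):
# fwd = df["close"].pct_change(5).shift(-5)
# print(rank_ic(basic_features(df), fwd))
```

A feature whose rank correlation is consistently near zero is unlikely to help, whatever the model.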
When research stalls, the answer is rarely more complexity. It is more information. Better targets, richer data, and stronger features give your models something real to learn from. This approach reduces noise, exposes structure, and turns a stagnant process into a productive one.
If you want to speed up this step and work with features that already capture robust market behaviors, the Quantreo library and the AI Trading Lab give you everything you need to enrich your data and restart your research with a much stronger foundation.



Spot on about feature engineering being the real unlock. Back when I worked on equity signals, we wasted months tuning layer depths before realising the issue was noisy microstructure features that didn't capture liquidity regimes properly. Swapping in volume-adjusted spreads and orderbook imbalance instantly improved Sharpe by 0.4, no architecture changes needed. The overfit trap is real when the signal isn't there from the start.