From Quantreo to Oryon
Latency does not start at execution. It starts upstream, in the feature pipeline.
Most trading systems do not lose performance at execution time. They lose it upstream, in the way features are computed, updated, and maintained. In research, full recomputation is acceptable. In production, it becomes a structural inefficiency.
Every unnecessary pass over the data, every oversized state, every hidden dependency in the pipeline introduces latency that compounds over time. Not always visible, but always present.
Oryon was designed to remove that layer entirely.
Not by optimizing it. By redefining it.
1. Rethinking Feature Computation
Oryon is the second beta of Quantreo, but more importantly, it introduces a different abstraction. Features are no longer batch transformations. They are stateful, incremental objects.
Each update processes only the newest observation, maintains only the strictly required state, and returns the current value immediately.
```python
from oryon.features import Sma

sma = Sma(["close"], window=3, outputs=["sma_3"])
sma.update([100.0])  # → [nan]
sma.update([101.0])  # → [nan]
sma.update([102.0])  # → [101.0]
sma.update([103.0])  # → [102.0]
```

The important part is not the indicator itself. A 3-period moving average stores exactly three values. No history, no recomputation, no hidden overhead.
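To make the idea concrete, here is a minimal pure-Python sketch of that contract (illustrative only; Oryon's actual implementation runs in its Rust core). A fixed-size buffer plus a running sum makes each update O(1) in time and O(window) in memory:

```python
from collections import deque
import math

class IncrementalSma:
    """Sketch of an incremental SMA: fixed-size state, O(1) updates."""

    def __init__(self, window: int):
        self.window = window
        self.buffer = deque(maxlen=window)  # stores exactly `window` values
        self.total = 0.0                    # running sum of the buffer

    def update(self, value: float) -> float:
        if len(self.buffer) == self.window:
            self.total -= self.buffer[0]  # evict the oldest value from the sum
        self.buffer.append(value)
        self.total += value
        if len(self.buffer) < self.window:
            return math.nan               # warm-up period, like Oryon's [nan]
        return self.total / self.window

sma = IncrementalSma(window=3)
values = [sma.update(x) for x in (100.0, 101.0, 102.0, 103.0)]
# → [nan, nan, 101.0, 102.0]
```

The update never touches more than one new observation, which is exactly what keeps latency flat as history grows.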
This same design extends consistently across the library:
entropy-based features
ADF-style statistical tests
multiple volatility estimators
rolling and dynamic correlations
operators, transformations, and scalers
All built around the same constraint:
→ Compute once, update incrementally, stay minimal in memory.
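The same constraint applies to less trivial features. As a hedged sketch (class name, default smoothing factor, and use of log returns are assumptions for illustration, not Oryon's API), an exponentially weighted volatility estimator needs only two scalars of state:

```python
import math

class EwmaVolatility:
    """Illustrative incremental volatility estimator.

    State is two scalars: the last price and the running EWMA of
    squared log returns. Each update is O(1), memory is constant."""

    def __init__(self, alpha: float = 0.06):
        self.alpha = alpha
        self.var = None         # running EWMA of squared returns
        self.prev_price = None  # last observed price

    def update(self, price: float) -> float:
        if self.prev_price is None:
            self.prev_price = price
            return math.nan     # need two prices for a return
        r = math.log(price / self.prev_price)
        self.prev_price = price
        self.var = r * r if self.var is None else (
            (1 - self.alpha) * self.var + self.alpha * r * r)
        return math.sqrt(self.var)

vol = EwmaVolatility(alpha=0.5)
vol.update(100.0)        # warm-up → nan
v1 = vol.update(200.0)   # first squared return enters the state
v2 = vol.update(200.0)   # zero return decays the estimate
```

Whether the feature is a moving average, a volatility estimator, or a rolling correlation, the shape is the same: a small fixed state and a single-observation update.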
2. Performance where it actually matters
Oryon is built on a Rust-backed core, exposed through a minimal Python interface.
This is not an implementation detail. It is a design requirement.
Under production conditions:
fast paths run below 1 microsecond
the slowest feature update remains under 19 microseconds
a full feature state can be refreshed in less than 1 millisecond
This is where the difference becomes tangible.
Not in isolated benchmarks, but in continuous pipelines running live, where every microsecond accumulates.
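The figures above come from Oryon's own Rust core. As a rough way to check per-update latency of any incremental feature from Python (this harness measures Python-level overhead only and is an illustrative sketch, not Oryon tooling):

```python
import time
import statistics

def measure_update_latency(update, values, repeats=5):
    """Median seconds per call of an incremental `update` callable."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        for v in values:
            update(v)
        elapsed = time.perf_counter() - start
        timings.append(elapsed / len(values))
    return statistics.median(timings)

# Example subject: a trivial O(1) running mean.
state = {"n": 0, "mean": 0.0}
def running_mean(x):
    state["n"] += 1
    state["mean"] += (x - state["mean"]) / state["n"]
    return state["mean"]

latency = measure_update_latency(running_mean, [float(i) for i in range(100_000)])
print(f"~{latency * 1e9:.0f} ns per update")
```

Timing a tight update loop rather than a single call amortizes timer resolution, which matters when individual updates are sub-microsecond.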
3. One pipeline, from research to production
The same abstraction is used everywhere. The same objects.
The same update logic. The same behavior. There is no distinction between a research pipeline and a production pipeline. No reimplementation, no hidden mismatch, no translation layer.
What you build is what you run. This removes an entire class of errors that typically appears when moving from notebooks to live systems.
And it forces a discipline. Features must be correct, incremental, and production-ready from the start.
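One way to picture this handoff, as a sketch with hypothetical stand-in features (the pipeline class and feature names below are illustrative, not Oryon's API): replay history through the exact stateful objects you will deploy, then keep calling the same update on live data.

```python
class RunningMean:
    """Toy O(1) incremental feature used as a stand-in."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return self.mean

class RunningMax:
    """Toy O(1) incremental feature used as a stand-in."""
    def __init__(self):
        self.best = float("-inf")
    def update(self, x):
        self.best = max(self.best, x)
        return self.best

class FeaturePipeline:
    """One pipeline object, identical in research and production."""
    def __init__(self, features):
        self.features = features  # name -> object with .update(x)
    def update(self, x):
        return {name: f.update(x) for name, f in self.features.items()}

pipeline = FeaturePipeline({"mean": RunningMean(), "max": RunningMax()})

# Research: replay historical closes through the deployable objects.
for close in [100.0, 101.0, 102.0]:
    snapshot = pipeline.update(close)

# Production: the live path is the same call, on the same warm state.
live = pipeline.update(103.0)
```

There is no "backtest version" and "live version" of the pipeline to keep in sync; warm-up and deployment are the same code path.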
⇒ In the next newsletter, we’ll build a research pipeline that is natively streaming-ready and can be deployed as-is in production.
Oryon is now available in beta.
If you want to take a closer look at the library and its design, you can explore it directly on GitHub.
And if you find it useful, consider adding a star; it helps more than it seems.



