Machine learning has a difficult secret. Organizations deploy models that achieve 98% accuracy in validation, then watch them quietly degrade in production. The team calls it "concept drift" and moves on. But what if this isn't a mysterious phenomenon? What if it's a predictable consequence of how we optimize?
I started asking this question after watching another production model fail. The answer led somewhere unexpected: the geometry we use for optimization determines whether models stay stable as distributions shift. Not the data. Not the hyperparameters. The space itself.
I realized that credit risk is fundamentally a ranking problem, not a classification problem. You don't need to predict "default" or "no default" with 98% accuracy. You need to order borrowers by risk: Is Borrower A riskier than Borrower B? If the economy deteriorates, who defaults first?
Standard approaches miss this completely. Here's what gradient-boosted trees (XGBoost, the field's favorite tool) actually achieve on the Freddie Mac Single-Family Loan-Level Dataset (692,640 loans spanning 1999–2023):
- Accuracy: 98.7% ← looks impressive
- AUC (ranking ability): 60.7% ← barely better than random
- 12 months later: 96.6% accuracy, but the ranking degrades
- 36 months later: 93.2% accuracy, AUC is 66.7% (essentially useless)
XGBoost achieves impressive accuracy but fails at the actual task: ordering risk. And it degrades predictably.
Now compare this to what I've developed (presented in a paper accepted at IEEE DSA 2025):
- Initial AUC: 80.3%
- 12 months later: 76.4%
- 36 months later: 69.7%
- 60 months later: 69.7%
The difference: XGBoost loses 32 AUC points over 60 months. Our approach? Just 10.6 points. AUC (Area Under the Curve) is what tells us how well the trained algorithm will predict risk on unseen data.
Why does this happen? It comes down to something unexpected: the geometry of optimization itself.
Why This Matters (Even If You're Not in Finance)
This isn't just about credit scores. Any system where ranking matters more than exact predictions faces this problem:
- Medical risk stratification: who needs urgent care first?
- Customer churn prediction: which customers should we focus retention efforts on?
- Content recommendation: what should we show next?
- Fraud detection: which transactions merit human review?
- Supply chain prioritization: which disruptions to address first?
When your context changes gradually (and whose doesn't?), accuracy metrics mislead you. A model can maintain 95% accuracy while completely scrambling the order of who's actually at highest risk.
That's not a model degradation problem. That's an optimization problem.
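The gap between the two metrics is easy to demonstrate with synthetic numbers (my own illustration, not the Freddie Mac data): a degenerate model that gives every borrower the same score keeps high accuracy on an imbalanced portfolio, while its ranking ability is no better than a coin flip.

```python
def accuracy(y_true, y_pred):
    """Fraction of correct hard (0/1) predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Pairwise AUC: probability that a random positive outranks
    a random negative (ties credited 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
               for sp in pos for sn in neg)
    return wins / (len(pos) * len(neg))

# Imbalanced portfolio: 95 healthy loans, 5 defaults
y = [0] * 95 + [1] * 5
# Degenerate model: the same low risk score for every borrower
scores = [0.1] * 100
y_pred = [1 if s >= 0.5 else 0 for s in scores]

print(accuracy(y, y_pred))  # 0.95 -> "95% accurate"
print(auc(y, scores))       # 0.5  -> useless for ordering risk
```

The model never distinguishes any two borrowers, yet the accuracy number looks excellent. That is exactly the failure mode described above.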
What Physics Teaches Us About Stability
Think about GPS navigation. If you only optimize for "shortest current route," you might guide someone onto a road that's about to close. But if you preserve the structure of how traffic flows (the relationships between routes), you can maintain good guidance even as conditions change. That's what we need for credit models. But how do you preserve structure?
NASA has faced this exact problem for years. When simulating planetary orbits over millions of years, standard computational methods make planets slowly drift, not because of the physics, but because of accumulated numerical error. Mercury gradually spirals into the Sun. Jupiter drifts outward. They solved this with symplectic integrators: algorithms that preserve the geometric structure of the system. The orbits stay stable because the method respects what physicists call "phase space volume": it maintains the relationships between positions and velocities.
Now here's the surprising part: credit risk has a similar structure.
The Geometry of Rankings
Standard gradient descent optimizes in Euclidean space. It finds local minima on your training distribution. But Euclidean geometry doesn't preserve relative orderings when distributions shift.
What does?
Symplectic manifolds.
In Hamiltonian mechanics (a formalism used in physics), conservative systems (those with no energy loss) evolve on symplectic manifolds: spaces with a 2-form structure that preserves phase space volume (Liouville's theorem).
In this phase space, symplectic transformations preserve relative distances. Not absolute positions, but orderings. Exactly what we need for ranking under distribution shift. When you simulate a frictionless pendulum using standard integration methods, energy drifts. The pendulum in Figure 1 slowly speeds up or slows down, not because of the physics, but because of numerical approximation. Symplectic integrators don't have this problem because they preserve the Hamiltonian structure exactly. The same principle can be applied to neural network optimization.
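The drift in Figure 1 can be reproduced in a few lines. The sketch below is my own toy example (a unit-mass, unit-frequency harmonic oscillator standing in for the pendulum): it integrates the same system two ways, and the only difference is that symplectic Euler's position update reads the freshly updated momentum.

```python
def explicit_euler(q, p, dt, steps):
    # Both updates read the OLD state: energy grows by (1 + dt^2) per step
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    # Momentum first; the position update then uses the NEW momentum
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

def energy(q, p):
    # H(q, p) = p^2/2 + q^2/2 for the unit oscillator
    return 0.5 * (p * p + q * q)

q0, p0, dt, steps = 1.0, 0.0, 0.1, 1000
e_start = energy(q0, p0)                              # 0.5
e_euler = energy(*explicit_euler(q0, p0, dt, steps))  # explodes
e_symp = energy(*symplectic_euler(q0, p0, dt, steps)) # stays near 0.5
print(e_start, e_euler, e_symp)
```

After 1,000 steps the explicit integrator's energy has grown by orders of magnitude, while the symplectic one oscillates within a few percent of the true value. The same asymmetry in update order is what the optimizer below exploits.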

Protein folding simulations face the same problem. You're modeling thousands of atoms interacting over microseconds to milliseconds: billions of integration steps. Standard integrators accumulate energy: molecules heat up artificially, bonds break that shouldn't, the simulation explodes.

The Implementation: Structure-Preserving Optimization
Here's what I actually did:
Hamiltonian Framework for Neural Networks
I reformulated neural network training as a Hamiltonian system:
H(q, p) = T(p) + V(q)
In mechanical systems, T(p) is the kinetic energy term and V(q) is the potential energy. In this analogy, T(p) represents the cost of changing the model parameters, and V(q) represents the loss function of the current model state.
Symplectic Euler optimizer (not Adam/SGD):
Instead of Adam or SGD for optimization, I use symplectic integration:
p_{t+1} = p_t − Δt · ∂H/∂q(q_t, p_{t+1})
q_{t+1} = q_t + Δt · ∂H/∂p(q_t, p_{t+1})
I've used the symplectic Euler method for a Hamiltonian system with position q and momentum p.
Where:
- H is the Hamiltonian (an energy function derived from the loss)
- Δt is the time step (analogous to the learning rate)
- q are the network weights (position coordinates), and
- p are momentum variables (velocity coordinates)
Notice that p_{t+1} appears in both updates. This coupling is crucial: it's what preserves the symplectic structure. This isn't just momentum; it's structure-preserving integration.
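As a sketch (my own minimal version, assuming the separable form H(q, p) = |p|²/2 + V(q), where V is the training loss), one update step could look like:

```python
def symplectic_euler_step(q, p, grad_V, dt):
    """One structure-preserving step for H(q, p) = |p|^2/2 + V(q).

    q: parameters (positions), p: momenta, grad_V: gradient of the
    loss V at q, dt: step size (plays the role of the learning rate).
    """
    g = grad_V(q)
    # p_{t+1} = p_t - dt * dV/dq(q_t)
    p_new = [pi - dt * gi for pi, gi in zip(p, g)]
    # q_{t+1} = q_t + dt * p_{t+1}  <- reads the NEW momentum
    q_new = [qi + dt * pi for qi, pi in zip(q, p_new)]
    return q_new, p_new

# Toy quadratic loss V(q) = |q|^2/2, so grad_V(q) = q
q, p = [1.0, -2.0], [0.0, 0.0]
for _ in range(100):
    q, p = symplectic_euler_step(q, p, lambda q: q, dt=0.1)

# Conservative dynamics orbit the minimum at bounded energy
E = 0.5 * (sum(x * x for x in p) + sum(x * x for x in q))
print(E)  # stays near the initial 2.5 rather than decaying or blowing up
```

Note that purely conservative dynamics explore level sets of H rather than converge; in practice some dissipation, or a constraint like the Hamiltonian-based loss described next, is needed for the iterates to settle.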
Hamiltonian-constrained loss
In addition, I've built a loss based on the Hamiltonian formalism:
L(θ) = L_base(θ) + λ R(θ)
Where:
- L_base(θ) is the binary cross-entropy loss
- R(θ) is a regularization term (an L2 penalty on the weights), and
- λ is the regularization coefficient
The regularization term penalizes deviations from energy conservation, constraining optimization to low-dimensional manifolds in parameter space.
How It Works
The mechanism has three components:
- Symplectic structure → volume preservation → bounded parameter exploration
- Hamiltonian constraint → energy conservation → stable long-term dynamics
- Coupled updates → preserve the geometric structure relevant for ranking
This structure is represented in the following algorithm:

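The algorithm figure is not reproduced here, but the pieces can be sketched end to end. The example below is my own simplification, not the paper's exact algorithm: a one-weight logistic model, the constrained loss L(θ) = L_base(θ) + λR(θ), coupled momentum-then-position updates, and an added damping factor (my assumption) so the conservative dynamics settle into a minimum.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, xs, ys, lam):
    """Constrained loss: binary cross-entropy + lam * L2 penalty."""
    bce = -sum(y * math.log(sigmoid(w * x)) +
               (1 - y) * math.log(1.0 - sigmoid(w * x))
               for x, y in zip(xs, ys)) / len(xs)
    return bce + lam * w * w

def grad(w, xs, ys, lam):
    """dL/dw: the usual logistic-regression gradient plus the L2 term."""
    g = sum((sigmoid(w * x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    return g + 2.0 * lam * w

def train(xs, ys, lam=0.01, dt=0.1, damping=0.9, steps=500):
    w, p = 0.0, 0.0  # position (weight) and momentum
    for _ in range(steps):
        # Coupled updates: momentum first ...
        p = damping * p - dt * grad(w, xs, ys, lam)
        # ... then the position update reads the NEW momentum
        w = w + dt * p
    return w

# Toy ranking data: riskier cases (y = 1) have larger x
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w = train(xs, ys)
print(w, loss(w, xs, ys, 0.01))  # positive weight; loss well below loss at w = 0
```

With damping set to 1.0 this reduces to the pure symplectic Euler dynamics; values below 1.0 trade exact conservation for convergence, which is the usual engineering compromise.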
The Results: 3x Better Temporal Stability
As explained, I tested this framework on the Freddie Mac Single-Family Loan-Level Dataset, the only long-term credit dataset with proper temporal splits spanning economic cycles.

Logic tells us that accuracy has to decrease across the three horizons (from 12 to 60 months): long-horizon predictions tend to be less accurate than short-term ones. But XGBoost doesn't follow this pattern (its AUC values go from 0.61 to 0.67, which is the signature of optimizing in the wrong space). Our symplectic optimizer, despite showing lower accuracy, does (its AUC values decrease from 0.84 to 0.70). For example, which better guarantees that a 36-month prediction is realistic: XGBoost's 0.97 accuracy, or the 0.77 AUC of the Hamiltonian-inspired approach? At 36 months, XGBoost's AUC is 0.63 (very close to a random prediction).
What Each Component Contributes
In our ablation study, all components contribute, with momentum in symplectic space providing the largest gains. This aligns with the theoretical background: the symplectic 2-form is preserved through the coupled position-momentum updates.

When to Use This Approach
Use symplectic optimization as an alternative to gradient descent optimizers when:
- Ranking matters more than classification accuracy
- Distribution shift is gradual and predictable (economic cycles, not black swans)
- Temporal stability is critical (financial risk, medical prognosis over time)
- Retraining is expensive (regulatory validation, approval overhead)
- You can afford 2–3x training time in exchange for production stability
- You have <10K features (works well up to ~10K dimensions)
Don’t Use When:
- Distribution shift is abrupt or unpredictable (market crashes, regime changes)
- You need interpretability for compliance (this doesn't help with explainability)
- You're in ultra-high dimensions (>10K features, where the cost becomes prohibitive)
- You have real-time training constraints (it's 2–3x slower than Adam)
What This Actually Means for Production Systems
For organizations deploying credit models or facing similar challenges:
Problem: You retrain quarterly. Each time, you validate on holdout data, see 97%+ accuracy, deploy, and watch AUC degrade over 12–18 months. You blame "market conditions" and retrain again.
Solution: Use symplectic optimization. Accept slightly lower peak accuracy (80% vs 98%) in exchange for 3x better temporal stability. Your model stays reliable longer. You retrain less often. Regulatory explanations are simpler: "Our model maintains ranking stability under distribution shift."
Cost: 2–3x longer training time. For monthly or quarterly retraining this is acceptable: you're trading hours of compute for months of stability.
This is engineering, not magic. We're optimizing in a space that preserves what actually matters for the business problem.
The Bigger Picture
Model degradation isn't inevitable. It's a consequence of optimizing in the wrong space. Standard gradient descent finds solutions that work on your current distribution. Symplectic optimization finds solutions that preserve structure: the relationships between examples that determine rankings. Our proposed approach won't solve every problem in ML. But for the practitioner watching their production model decay, and for the team facing regulatory questions about model stability, it's a solution that works today.
Next Steps
The code is available: [link]
The full paper: Will be available soon. Contact me if you are interested in receiving it ([email protected])
Questions or collaboration: If you're working on ranking problems with temporal stability requirements, I'd be interested to hear about your use case.
Thanks for reading and sharing!
Need help implementing this kind of system?
Javier Marin
Applied AI Consultant | Production AI Systems + Regulatory Compliance
[email protected]

