Escape local minima where gradient descent fails. Population-based radial dynamics with adaptive momentum for neural networks and complex optimization landscapes.
Dual-lane population dynamics with radial scaling and inter-lane momentum exchange.
Dynamic inward/outward phases balance exploration and exploitation. Starts broad, narrows to precision.
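To make the idea concrete, here is a minimal sketch of one possible radial schedule: steady contraction modulated by a periodic outward swing. The rule and constants below are illustrative assumptions, not DRD's actual internals.

```python
import numpy as np

# Hypothetical radius schedule: overall contraction (exploitation)
# modulated by a periodic outward swing (exploration).
def search_radius(t, R_start=3.0, decay=0.995, swing=0.3, period=50):
    base = R_start * decay ** t                            # steady inward contraction
    phase = 1.0 + swing * np.sin(2 * np.pi * t / period)   # outward/inward swing
    return base * phase

print([round(search_radius(t), 3) for t in (0, 25, 200)])
```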
64+ agents explore simultaneously. Leader-trailer dynamics share information across the population.
Adaptive chase mechanism transfers momentum between lanes, accelerating convergence to global optima.
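A rough sketch of what a leader-trailer chase update can look like: each trailer blends its own momentum with a pull toward the current leader. The function name, gains, and step size here are hypothetical, not part of the thalosforge API.

```python
import numpy as np

# Illustrative chase update for one trailer agent (not DRD's code):
# momentum is carried over and redirected toward the current best agent.
def chase_step(position, velocity, leader_position, beta=0.9, chase_gain=0.5, lr=0.1):
    pull = leader_position - position                # direction toward the leader
    velocity = beta * velocity + chase_gain * pull   # blend old momentum with the pull
    return position + lr * velocity, velocity
```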
Built-in stability with gradient clipping prevents explosive updates on ill-conditioned landscapes.
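For reference, norm-based clipping of an update vector looks like the sketch below; the threshold is illustrative and DRD's actual clipping rule may differ.

```python
import numpy as np

# Standard norm clipping: rescale the update if its norm exceeds max_norm.
def clip_update(update, max_norm=1.0):
    norm = np.linalg.norm(update)
    return update if norm <= max_norm else update * (max_norm / norm)
```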
Learning rate adjusts based on loss gap between leaders. Speeds up in flat regions, slows near optima.
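One simple way to key a step size to the loss gap between the two best agents; the exact rule and constants below are assumptions, not DRD's implementation.

```python
# Hypothetical step-size rule: a small gap between the two leading agents
# suggests a flat region (grow the step), a large gap suggests a steep
# basin (shrink the step). Constants are illustrative only.
def adapt_lr(lr, best_loss, second_best_loss, gap_threshold=1e-3, up=1.1, down=0.7):
    gap = abs(second_best_loss - best_loss)
    return lr * up if gap < gap_threshold else lr * down
```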
Top 20% of agents preserved across iterations. Never lose your best solutions.
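A minimal elitism sketch, assuming lower loss is better and a 20% elite fraction; the helper below is illustrative, not part of the thalosforge API.

```python
import numpy as np

# Keep the best-scoring fraction of the population unchanged
# across iterations (lower loss = better).
def keep_elites(population, losses, frac=0.2):
    k = max(1, int(len(population) * frac))
    elite_idx = np.argsort(losses)[:k]   # indices of the k lowest losses
    return [population[i] for i in elite_idx]
```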
When standard methods get stuck, DRD finds a way.
| Feature | SGD/Adam | CMA-ES | DRD |
|---|---|---|---|
| Escapes Local Minima | ✗ Poor | ✓ Good | ✓ Excellent |
| High-Dimensional (1000+) | ✓ Excellent | ✗ Struggles | ✓ Good |
| Ill-Conditioned Problems | ✗ Unstable | ✓ Good | ✓ Excellent |
| Neural Network Training | ✓ Standard | ✗ Slow | ✓ Competitive |
| No Hyperparameter Tuning | ✗ Heavy tuning | ✓ Minimal | ✓ Self-adaptive |
DRD excels on non-convex landscapes where gradient methods fail.
Optimize hyperparameters and architecture choices across complex, non-smooth loss landscapes.
Fit complex physical models with many local minima. Energy minimization, molecular dynamics.
Calibrate financial models, simulation parameters, and sensor systems with noisy objectives.
Policy optimization on complex reward landscapes. Escape deceptive local optima.
Use DRD anywhere you'd use Adam or SGD. Same interface, better global optimization.
```python
import thalosforge as tf
import numpy as np

# Define non-convex objective (Rastrigin)
def rastrigin(x):
    A = 10
    return A * len(x) + sum(
        xi**2 - A * np.cos(2 * np.pi * xi)
        for xi in x
    )

# Run DRD
optimizer = tf.DRDOptimizer(
    loss_fn=rastrigin,
    dimensions=10,
    population_size=64,
    R_start=3.0,
    adaptive_chase=True
)
result = optimizer.optimize(max_iterations=1000)

print(f"Best loss: {result.best_loss:.6f}")
print(f"Best solution: {result.best_params}")
print(f"Iterations: {result.iterations}")
print(f"Escape events: {result.escape_count}")
```
Let DRD explore the full landscape while you focus on building.