Step-by-step guides to help you get the most out of ThalosForge.
Install ThalosForge and run your first optimization in under 10 minutes.
Learn when to use QuantumJolt, DSS, or Kestrel for your specific problem.
Use QuantumJolt to tune neural network hyperparameters in high-dimensional spaces.
Optimize expensive CFD simulations with DSS using minimal function evaluations.
Solve engineering problems with inequality and equality constraints using Kestrel.
Generate Pareto fronts for problems with competing objectives.
Keep fraud detection models accurate as fraud patterns evolve, without retraining.
Maintain sensor accuracy as environmental conditions change.
This tutorial will get you from zero to running your first ThalosForge optimization in under 10 minutes.
Install ThalosForge using pip:
pip install thalosforge
Verify the installation:
import thalosforge as tf
print(tf.__version__) # Should print version number
Let's optimize the classic Rastrigin function, a challenging multimodal benchmark:
import numpy as np

def rastrigin(x):
    """Rastrigin function - global minimum at origin"""
    A = 10
    return A * len(x) + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# Test it
print(rastrigin(np.zeros(10)))  # 0.0 (global minimum)
print(rastrigin(np.ones(10)))   # ~10 (not optimal)
Use ThalosForge to find the global minimum:
import thalosforge as tf

# Define search bounds (10 dimensions)
bounds = [(-5.12, 5.12)] * 10

# Run optimization
result = tf.optimize(
    rastrigin,
    bounds=bounds,
    method='quantumjolt',
    max_evals=2000
)

# Print results
print(f"Best value found: {result.fun:.6f}")
print(f"Best solution: {result.x}")
print(f"Evaluations used: {result.nfev}")
Try different optimizers to see how they perform:
# Try DSS (deterministic)
result_dss = tf.optimize(
    rastrigin,
    bounds=bounds,
    method='dss',
    max_evals=2000
)
print(f"DSS result: {result_dss.fun:.6f}")

# Compare with SciPy's differential evolution
from scipy.optimize import differential_evolution

scipy_result = differential_evolution(
    rastrigin,
    bounds,
    maxiter=200,  # Note: with SciPy's default popsize this permits tens of
                  # thousands of function evaluations, far more than the 2000
                  # above - match budgets for a fair comparison
    seed=42
)
print(f"SciPy result: {scipy_result.fun:.6f}")
Now that you've run your first optimization, explore the tutorials below to go deeper into each optimizer and use case.
This tutorial shows how to use QuantumJolt for high-dimensional hyperparameter optimization of machine learning models.
Hyperparameter optimization is challenging because every evaluation is expensive (each candidate requires training and cross-validating a full model), the search space mixes integer and continuous parameters, the objective is noisy, and no gradients are available.
import numpy as np
import thalosforge as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Generate synthetic data
X, y = make_classification(
    n_samples=1000,
    n_features=20,
    n_informative=10,
    random_state=42
)

def objective(params):
    """
    Objective function for RF hyperparameter tuning.
    We minimize negative accuracy (to maximize accuracy).
    """
    # Unpack parameters
    n_estimators = int(params[0])
    max_depth = int(params[1])
    min_samples_split = int(params[2])
    min_samples_leaf = int(params[3])
    max_features = params[4]  # Continuous [0.1, 1.0]

    # Create and evaluate model
    model = RandomForestClassifier(
        n_estimators=n_estimators,
        max_depth=max_depth,
        min_samples_split=min_samples_split,
        min_samples_leaf=min_samples_leaf,
        max_features=max_features,
        random_state=42,
        n_jobs=-1
    )

    # 5-fold cross-validation
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')

    # Return negative accuracy (we minimize)
    return -scores.mean()
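Before spending the full budget, it can help to sanity-check the objective once at a hand-picked configuration (the values below are illustrative, not recommendations):

# One evaluation: 200 trees, depth 10, default-ish split/leaf, half the features
print(objective([200, 10, 2, 1, 0.5]))  # Prints the negative CV accuracy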
# Define bounds for each hyperparameter
bounds = [
    (10, 500),   # n_estimators
    (2, 50),     # max_depth
    (2, 20),     # min_samples_split
    (1, 10),     # min_samples_leaf
    (0.1, 1.0),  # max_features
]

# Track progress
history = []

def callback(x, f, iteration):
    history.append(f)
    if iteration % 10 == 0:
        print(f"Iteration {iteration}: accuracy = {-f:.4f}")

# Run QuantumJolt
result = tf.optimize(
    objective,
    bounds=bounds,
    method='quantumjolt',
    max_evals=100,  # Each eval is expensive (CV)
    callback=callback
)

# Extract best hyperparameters
best_params = {
    'n_estimators': int(result.x[0]),
    'max_depth': int(result.x[1]),
    'min_samples_split': int(result.x[2]),
    'min_samples_leaf': int(result.x[3]),
    'max_features': result.x[4],
}

print(f"\nBest accuracy: {-result.fun:.4f}")
print(f"Best parameters: {best_params}")
Learn how to use Saturna to keep fraud detection models accurate as fraud patterns evolve, without retraining.
Fraud detection models face a unique challenge: fraudsters constantly change tactics. This causes data drift: the distribution of live transactions steadily diverges from the data the model was trained on, and a model that scored well at deployment quietly loses accuracy over time.
from thalosforge import Saturna
import numpy as np

# Load your existing ONNX fraud model
saturna = Saturna(
    model_path="fraud_detector.onnx",
    lambda_blend=0.3,  # 30% drift correction
    temperature=0.9    # Soft modulation
)
Calibrate Saturna on your baseline (clean) data:
# Load baseline data from model training period
X_baseline = load_baseline_transactions()  # Your data loading function

# Calibrate Saturna
calibration_result = saturna.calibrate(X_baseline)
print(calibration_result)
# {'status': 'calibrated', 'samples_used': 10000}
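If the legitimate baseline itself changes (a new product launch, a pricing change), recalibrate on fresh verified-clean data rather than letting the drift correction absorb the shift. The call is the same calibrate() shown above; X_recent_clean is a hypothetical variable holding that data:

saturna.calibrate(X_recent_clean)  # Recent transactions verified as clean (hypothetical)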
# In production, use Saturna for all predictions
def score_transaction(transaction_features):
    """Score a transaction with drift correction."""
    # Saturna automatically corrects for drift
    fraud_probability = saturna.predict(transaction_features)

    # Check calibration health
    stability = saturna.stability_index()

    return {
        'fraud_probability': float(fraud_probability),
        'calibration_stability': stability,
        'needs_review': stability < 0.8
    }

# Score incoming transactions
for transaction in incoming_stream:
    result = score_transaction(transaction.features)

    if result['fraud_probability'] > 0.7:
        flag_for_review(transaction)

    if result['needs_review']:
        alert_mlops_team()
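The loop above leans on three placeholders for your own infrastructure: incoming_stream (your transaction feed) plus flag_for_review and alert_mlops_team. Illustrative stubs like these make the snippet runnable in a test harness:

def flag_for_review(transaction):
    # Stand-in for your case-management integration
    print(f"Flagged transaction for review: {transaction}")

def alert_mlops_team():
    # Stand-in for your paging/alerting integration
    print("ALERT: Saturna calibration stability below threshold")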
# Daily health check
def daily_calibration_check():
    stability = saturna.stability_index()
    status = saturna.status()

    print(f"Calibration stability: {stability:.2f}")
    print(f"Status: {status}")

    if stability < 0.7:
        # Consider increasing lambda_blend
        saturna.lambda_blend = min(0.5, saturna.lambda_blend + 0.1)
        print(f"Increased drift correction to {saturna.lambda_blend}")

    if stability < 0.5:
        # Manual review needed
        send_alert("Fraud model calibration critically low")  # Your alerting function
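How you trigger the check is up to your infrastructure; a cron job or your existing scheduler is the usual choice. For a long-lived process, a bare stdlib loop is enough as a sketch:

import time

while True:
    daily_calibration_check()
    time.sleep(24 * 60 * 60)  # One day between checks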
Learn how to eliminate jerkiness and oscillation in robot motion using Command Dosage Control.
When you send a position command to a robot motor, the traditional approach sends the full target instantly. This causes abrupt starts, mechanical stress on joints and gears, overshoot, and the jerky, oscillating motion this tutorial sets out to eliminate.
Command Dosage Control injects commands gradually, creating smooth trajectories:
from thalosforge import DosageController
import time

# Initialize controller at 100 Hz
controller = DosageController(
    update_rate=100,
    strategy="scurve"  # S-curve for smooth acceleration
)

# Set safety limits
controller.set_limits(
    max_velocity=1.0,      # m/s
    max_acceleration=2.0,  # m/s²
    max_jerk=5.0           # m/s³
)

# Target position (instant command from high-level controller)
target = [1.5, -0.3, 0.8]  # XYZ in meters

# dose() returns a generator of smooth intermediate commands
for step in controller.dose(target):
    # Send smooth command to the robot (`robot` is your hardware interface)
    robot.send_command(step)

    # Wait for next control cycle
    time.sleep(0.01)  # 100 Hz = 10 ms

print("Motion complete - smooth arrival at target")
# Linear ramp (simplest)
controller_linear = DosageController(
    update_rate=100,
    strategy="linear"
)

# S-curve (smooth acceleration)
controller_scurve = DosageController(
    update_rate=100,
    strategy="scurve"
)

# Exponential (soft settling)
controller_exp = DosageController(
    update_rate=100,
    strategy="exponential"
)

# Adaptive (learns from robot response)
controller_adaptive = DosageController(
    update_rate=100,
    strategy="adaptive"
)
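A quick way to compare strategies is to count control cycles to reach the same target. This sketch assumes dose() plans the whole motion up front, so the generator can be exhausted without hardware attached; on a real robot, drive the loop at the control rate as shown earlier:

# Compare step counts per strategy (no hardware needed under the assumption above)
target = [1.5, -0.3, 0.8]
for name, ctrl in [("linear", controller_linear),
                   ("scurve", controller_scurve),
                   ("exponential", controller_exp)]:
    steps = list(ctrl.dose(target))
    print(f"{name}: {len(steps)} steps ({len(steps) / 100:.2f} s at 100 Hz)")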
# Emergency stop
if detect_collision():
    controller.emergency_stop()

# Oscillation detection
if controller.detect_oscillation():
    print("Warning: Oscillation detected")
    # Controller automatically applies damping

# Get planned trajectory for visualization
trajectory = controller.get_trajectory()
plot_trajectory(trajectory)
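plot_trajectory above is left to you; here is a minimal matplotlib sketch, assuming get_trajectory() returns a sequence of XYZ position commands sampled at the controller's update rate (both are assumptions about the return format):

import matplotlib.pyplot as plt
import numpy as np

def plot_trajectory(trajectory, update_rate=100):
    """Plot each axis of the planned trajectory against time."""
    points = np.asarray(trajectory)           # Assumed shape: (n_steps, 3)
    t = np.arange(len(points)) / update_rate  # Time axis in seconds
    for i, axis_name in enumerate("XYZ"):
        plt.plot(t, points[:, i], label=axis_name)
    plt.xlabel("Time (s)")
    plt.ylabel("Position (m)")
    plt.legend()
    plt.show()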