Tutorials

Step-by-step guides to help you get the most out of ThalosForge.

Getting Started

- Quick Start Guide (Beginner, 10 min): Install ThalosForge and run your first optimization in under 10 minutes.
- Choosing the Right Optimizer (Beginner, 15 min): Learn when to use QuantumJolt, DSS, or Kestrel for your specific problem.

Optimization

- Hyperparameter Optimization (Intermediate, 20 min): Use QuantumJolt to tune neural network hyperparameters in high-dimensional spaces.
- CFD Simulation Optimization (Advanced, 30 min): Optimize expensive CFD simulations with DSS using minimal function evaluations.
- Constrained Engineering Design (Intermediate, 25 min): Solve engineering problems with inequality and equality constraints using Kestrel.
- Multi-Objective Optimization (Advanced, 35 min): Generate Pareto fronts for problems with competing objectives.

MLOps

- Fraud Detection Model Maintenance (Intermediate, 25 min): Keep fraud detection models accurate as fraud patterns evolve, without retraining.
- IoT Sensor Calibration (Intermediate, 20 min): Maintain sensor accuracy as environmental conditions change.

Robotics

- Smooth Robot Motion Basics (Beginner, 15 min): Eliminate jerkiness in robot motion with Command Dosage Control.
- ROS Integration (Advanced, 40 min): Integrate Command Dosage Control with ROS for industrial robot applications.

Quick Start Guide

Beginner · 10 minutes · Optimization

This tutorial will get you from zero to running your first ThalosForge optimization in under 10 minutes.

1. Installation

Install ThalosForge using pip:

pip install thalosforge

Verify the installation:

import thalosforge as tf
print(tf.__version__)  # Should print version number

2. Define Your Problem

Let's optimize the classic Rastrigin function, a challenging multimodal benchmark:

import numpy as np

def rastrigin(x):
    """Rastrigin function - global minimum at origin"""
    A = 10
    return A * len(x) + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# Test it
print(rastrigin(np.zeros(10)))  # 0.0 (global minimum)
print(rastrigin(np.ones(10)))   # ~10 (not optimal)

3. Run Optimization

Use ThalosForge to find the global minimum:

import thalosforge as tf

# Define search bounds (10 dimensions)
bounds = [(-5.12, 5.12)] * 10

# Run optimization
result = tf.optimize(
    rastrigin,
    bounds=bounds,
    method='quantumjolt',
    max_evals=2000
)

# Print results
print(f"Best value found: {result.fun:.6f}")
print(f"Best solution: {result.x}")
print(f"Evaluations used: {result.nfev}")
Expected Output
Best value found: 0.000000 (or very close to 0)
Best solution: [0.0, 0.0, 0.0, ...] (all near zero)
Evaluations used: ~2000

4. Compare Methods

Try different optimizers to see how they perform:

# Try DSS (deterministic)
result_dss = tf.optimize(
    rastrigin,
    bounds=bounds,
    method='dss',
    max_evals=2000
)

print(f"DSS result: {result_dss.fun:.6f}")

# Compare with SciPy
from scipy.optimize import differential_evolution

scipy_result = differential_evolution(
    rastrigin,
    bounds,
    maxiter=200,
    seed=42
)

print(f"SciPy result: {scipy_result.fun:.6f}")
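One caveat when comparing budgets: differential_evolution's maxiter counts generations, not function evaluations. With the default popsize of 15 and 10 dimensions, each generation costs around 150 evaluations, so maxiter=200 permits far more than 2000. Check result.nfev to compare fairly:

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    A = 10
    return A * len(x) + np.sum(x**2 - A * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 10

# maxiter is a generation count: evaluations ~ (maxiter + 1) * popsize * ndim
scipy_result = differential_evolution(rastrigin, bounds, maxiter=200, seed=42)
print(f"SciPy evaluations: {scipy_result.nfev}")  # typically well above 2000
```

To hold SciPy to roughly the same budget as the examples above, lower maxiter (around maxiter=12 gives on the order of 2000 evaluations with default settings).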

5. Next Steps

Now that you've run your first optimization, explore the tutorials below: hyperparameter tuning with QuantumJolt, fraud model maintenance with Saturna, and smooth robot motion with Command Dosage Control.

Hyperparameter Optimization

Intermediate · 20 minutes · Optimization

This tutorial shows how to use QuantumJolt for high-dimensional hyperparameter optimization of machine learning models.

The Problem

Hyperparameter optimization is challenging because:

- Each evaluation is expensive: training and cross-validating a model can take minutes to hours.
- The search space is high-dimensional and mixes integer and continuous parameters.
- The objective is noisy and non-convex, so exhaustive grid search is infeasible and random search wastes evaluations.
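To see the combinatorial blow-up concretely: even a coarse grid of 10 values per hyperparameter becomes infeasible after a handful of dimensions.

```python
# Grid search cost: points-per-axis raised to the number of hyperparameters
grid_points = 10
for n_params in (2, 5, 10, 20):
    print(f"{n_params:>2} hyperparameters -> {grid_points ** n_params:,} grid evaluations")
```

At one minute per cross-validated evaluation, even the 5-parameter grid (100,000 evaluations) would take roughly 70 days.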

Setup

import numpy as np
import thalosforge as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Generate synthetic data
X, y = make_classification(
    n_samples=1000,
    n_features=20,
    n_informative=10,
    random_state=42
)

Define the Objective

def objective(params):
    """
    Objective function for RF hyperparameter tuning.
    We minimize negative accuracy (to maximize accuracy).
    """
    # Unpack parameters
    n_estimators = int(params[0])
    max_depth = int(params[1])
    min_samples_split = int(params[2])
    min_samples_leaf = int(params[3])
    max_features = params[4]  # Continuous [0.1, 1.0]
    
    # Create and evaluate model
    model = RandomForestClassifier(
        n_estimators=n_estimators,
        max_depth=max_depth,
        min_samples_split=min_samples_split,
        min_samples_leaf=min_samples_leaf,
        max_features=max_features,
        random_state=42,
        n_jobs=-1
    )
    
    # 5-fold cross-validation
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
    
    # Return negative accuracy (we minimize)
    return -scores.mean()
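Before spending the evaluation budget, it is worth calling the objective once by hand to confirm it runs and returns a sensible value. This sketch repeats the setup above so it can run on its own:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=42)

def objective(params):
    """Negative mean CV accuracy for a RandomForest with the given params."""
    model = RandomForestClassifier(
        n_estimators=int(params[0]),
        max_depth=int(params[1]),
        min_samples_split=int(params[2]),
        min_samples_leaf=int(params[3]),
        max_features=params[4],
        random_state=42,
        n_jobs=-1,
    )
    return -cross_val_score(model, X, y, cv=5, scoring='accuracy').mean()

# Mid-range starting point: [n_estimators, max_depth, min_samples_split,
#                            min_samples_leaf, max_features]
value = objective([100, 10, 2, 1, 0.5])
print(f"Sanity check: accuracy = {-value:.4f}")
```

A value far below chance accuracy, a NaN, or an exception here is much cheaper to catch now than 50 evaluations into an optimization run.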

Run Optimization

# Define bounds for each hyperparameter
bounds = [
    (10, 500),      # n_estimators
    (2, 50),        # max_depth
    (2, 20),        # min_samples_split
    (1, 10),        # min_samples_leaf
    (0.1, 1.0),     # max_features
]

# Track progress
history = []
def callback(x, f, iteration):
    history.append(f)
    if iteration % 10 == 0:
        print(f"Iteration {iteration}: accuracy = {-f:.4f}")

# Run QuantumJolt
result = tf.optimize(
    objective,
    bounds=bounds,
    method='quantumjolt',
    max_evals=100,  # Each eval is expensive (CV)
    callback=callback
)

# Extract best hyperparameters
best_params = {
    'n_estimators': int(result.x[0]),
    'max_depth': int(result.x[1]),
    'min_samples_split': int(result.x[2]),
    'min_samples_leaf': int(result.x[3]),
    'max_features': result.x[4],
}

print(f"\nBest accuracy: {-result.fun:.4f}")
print(f"Best parameters: {best_params}")
Tip: Scaling to More Parameters
QuantumJolt excels with 20-100+ hyperparameters where grid search is impossible. For neural networks with learning rate, momentum, dropout rates, layer sizes, etc., QuantumJolt consistently outperforms random search and Bayesian optimization.
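As a sketch of what a larger search space can look like, bounds can be built programmatically. The parameter layout below (per-layer widths and dropout rates plus optimizer settings) is hypothetical, not a ThalosForge requirement:

```python
# Hypothetical neural-network search space, with bounds built programmatically
n_layers = 8

bounds = (
    [(1e-5, 1e-1)] +            # learning rate
    [(0.5, 0.999)] +            # momentum
    [(16, 1024)] * n_layers +   # units per layer (rounded to int in the objective)
    [(0.0, 0.6)] * n_layers     # dropout rate per layer
)

print(f"Search space dimensionality: {len(bounds)}")
```

A 10-point grid over these 18 parameters would need 10^18 evaluations; a direct-search optimizer explores the same space with a budget of hundreds.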

Fraud Detection Model Maintenance

Intermediate · 25 minutes · MLOps

Learn how to use Saturna to keep fraud detection models accurate as fraud patterns evolve—without retraining.

The Challenge

Fraud detection models face a unique challenge: fraudsters constantly change tactics. This causes:

- Input distributions that drift away from the data the model was trained on.
- Rising false positives as legitimate behavior gets misclassified.
- Frequent, costly retraining cycles just to hold accuracy steady.

Setup

from thalosforge import Saturna
import numpy as np

# Load your existing ONNX fraud model
saturna = Saturna(
    model_path="fraud_detector.onnx",
    lambda_blend=0.3,    # 30% drift correction
    temperature=0.9      # Soft modulation
)

Initial Calibration

Calibrate Saturna on your baseline (clean) data:

# Load baseline data from model training period
X_baseline = load_baseline_transactions()  # Your data loading function

# Calibrate Saturna
calibration_result = saturna.calibrate(X_baseline)
print(calibration_result)
# {'status': 'calibrated', 'samples_used': 10000}
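Saturna's internals are not shown here, but the kind of statistic a calibration step can rely on is easy to sketch. The block below is an illustrative stand-in, not Saturna's actual algorithm: it stores per-feature baseline means and standard deviations, then scores later batches by their average z-score distance from the baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in baseline "transactions": 10,000 samples, 5 features
X_baseline = rng.normal(loc=0.0, scale=1.0, size=(10_000, 5))
mu, sigma = X_baseline.mean(axis=0), X_baseline.std(axis=0)

def drift_score(X_batch):
    """Mean absolute z-score of batch feature means vs. the baseline."""
    z = (X_batch.mean(axis=0) - mu) / sigma
    return float(np.abs(z).mean())

in_distribution = rng.normal(0.0, 1.0, size=(500, 5))
shifted = rng.normal(0.8, 1.0, size=(500, 5))   # tactics changed: features moved

print(f"In-distribution drift: {drift_score(in_distribution):.2f}")
print(f"Shifted-batch drift:   {drift_score(shifted):.2f}")
```

A drift signal of this flavor is what lets a correction layer react to distribution shift without touching the underlying model weights.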

Production Inference

# In production, use Saturna for all predictions
def score_transaction(transaction_features):
    """Score a transaction with drift correction."""
    
    # Saturna automatically corrects for drift
    fraud_probability = saturna.predict(transaction_features)
    
    # Check calibration health
    stability = saturna.stability_index()
    
    return {
        'fraud_probability': float(fraud_probability),
        'calibration_stability': stability,
        'needs_review': stability < 0.8
    }

# Score incoming transactions
for transaction in incoming_stream:
    result = score_transaction(transaction.features)
    
    if result['fraud_probability'] > 0.7:
        flag_for_review(transaction)
    
    if result['needs_review']:
        alert_mlops_team()

Monitoring

# Daily health check
def daily_calibration_check():
    stability = saturna.stability_index()
    status = saturna.status()
    
    print(f"Calibration stability: {stability:.2f}")
    print(f"Status: {status}")
    
    if stability < 0.7:
        # Consider increasing lambda_blend
        saturna.lambda_blend = min(0.5, saturna.lambda_blend + 0.1)
        print(f"Increased drift correction to {saturna.lambda_blend}")
    
    if stability < 0.5:
        # Manual review needed
        send_alert("Fraud model calibration critically low")
Results
Typical results with Saturna on fraud detection:
• 30-50% reduction in false positives
• 12+ weeks between manual reviews
• $25-100K saved per model annually

Smooth Robot Motion Basics

Beginner · 15 minutes · Robotics

Learn how to eliminate jerkiness and oscillation in robot motion using Command Dosage Control.

The Problem

When you send a position command to a robot motor, the traditional approach sends the full target instantly. This causes:

- Jerky, abrupt motion as the actuator slews at full effort.
- Overshoot and oscillation around the target position.
- Unnecessary mechanical stress and wear on joints and gearboxes.

The Solution

Command Dosage Control injects commands gradually, creating smooth trajectories:

from thalosforge import DosageController
import time

# Initialize controller at 100 Hz
controller = DosageController(
    update_rate=100,
    strategy="scurve"  # S-curve for smooth acceleration
)

# Set safety limits
controller.set_limits(
    max_velocity=1.0,        # m/s
    max_acceleration=2.0,    # m/s²
    max_jerk=5.0             # m/s³
)

Basic Usage

# Target position (instant command from high-level controller)
target = [1.5, -0.3, 0.8]  # XYZ in meters

# dose() returns a generator of smooth intermediate commands
for step in controller.dose(target):
    # Send smooth command to robot
    robot.send_command(step)
    
    # Wait for next control cycle
    time.sleep(0.01)  # 100 Hz = 10ms

print("Motion complete - smooth arrival at target")

Dosing Strategies

# Linear ramp (simplest)
controller_linear = DosageController(
    update_rate=100,
    strategy="linear"
)

# S-curve (smooth acceleration)
controller_scurve = DosageController(
    update_rate=100,
    strategy="scurve"
)

# Exponential (soft settling)
controller_exp = DosageController(
    update_rate=100,
    strategy="exponential"
)

# Adaptive (learns from robot response)
controller_adaptive = DosageController(
    update_rate=100,
    strategy="adaptive"
)
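To build intuition for how the strategies differ, here are standalone NumPy versions of the three fixed profile shapes. This is an illustration of the general idea, not the library's exact trajectories: a linear ramp, a smoothstep-style S-curve, and an exponential approach that settles softly.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)       # normalized time over the motion

linear = t                            # constant velocity, abrupt start and stop
scurve = 3 * t**2 - 2 * t**3          # smoothstep: zero velocity at both ends
exponential = 1 - np.exp(-5 * t)      # fast start, asymptotic soft settling

# All three start at 0 and head toward the normalized target of 1.0
for name, profile in [("linear", linear), ("scurve", scurve), ("exp", exponential)]:
    print(f"{name:>6}: start={profile[0]:.2f}, end={profile[-1]:.2f}")
```

The S-curve's zero velocity at both endpoints is what eliminates the acceleration spikes (and hence the jerk) that a linear ramp produces at its corners.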

Safety Features

# Emergency stop
if detect_collision():
    controller.emergency_stop()

# Oscillation detection
if controller.detect_oscillation():
    print("Warning: Oscillation detected")
    # Controller automatically applies damping

# Get planned trajectory for visualization
trajectory = controller.get_trajectory()
plot_trajectory(trajectory)
Next Steps
ROS Integration — Use with industrial robots
API Reference — Complete DosageController documentation