GreyCat Algebra Library
@library("algebra", "0.0.0");
Numerical computing and machine learning library for GreyCat. Provides statistical profiling, neural networks, clustering, signal processing, pattern detection, and polynomial regression — all running natively within the GreyCat runtime.
Features
- Statistical profiling — multi-dimensional Gaussian analysis (min, max, avg, std, covariance, correlation)
- Neural networks — regression, classification, and autoencoder architectures with Dense, Linear, LSTM layers
- Dimensionality reduction — PCA with automatic best-dimension detection
- Clustering — K-means with mini-batch support and meta-learning
- Signal processing — FFT, frequency analysis, low-pass filtering, extrapolation
- Pattern detection — Euclidean, DTW, FFT, and SAX-based time-series pattern matching
- Polynomial regression — curve fitting, prediction, and time-series compression
- Time-series decomposition — aggregate instant data into hourly, daily, weekly, monthly, and yearly resolutions
- Climate — UTCI (Universal Thermal Climate Index) calculation
Statistical Profiling — GaussianND
Learn statistical properties from multi-dimensional data and apply normalization transforms.
// Create a profiler and learn from data
var profile = GaussianND {};
var data = Tensor {};
data.init(TensorType::f64, Array<int> { 0, 5 }); // [batch, 5 features]
data.append([0.67, -0.20, 0.19, -1.06, 0.46]);
data.append([-0.20, 3.82, -0.13, 1.06, -0.48]);
// ... add more observations
profile.learn(data);
// Access statistics
var avg = profile.avg(); // [5] averages
var std = profile.std(); // [5] standard deviations
var cov = profile.covariance(); // [5x5] covariance matrix
var corr = profile.correlation(); // [5x5] correlation matrix
// Normalize data
var normalized = profile.min_max_scaling(data); // (x - min) / (max - min)
var original = profile.inverse_min_max_scaling(normalized);
var standardized = profile.standard_scaling(data); // (x - avg) / std
var restored = profile.inverse_standard_scaling(standardized);
// Crop to subset of features
var sub_profile = profile.crop(0, 2); // features 0 to 2
API Reference — GaussianND
| Method | Description |
|---|---|
| learn(input) | Learn from a [batch x N] tensor |
| avg() | Returns [N] tensor of dimension averages |
| std() | Returns [N] tensor of standard deviations |
| covariance() | Returns [N x N] covariance matrix |
| correlation() | Returns [N x N] correlation matrix |
| dimensions() | Returns N (number of dimensions) |
| clear() | Reset all state |
| min_max_scaling(input) | Min-max normalization |
| inverse_min_max_scaling(input) | Inverse min-max normalization |
| standard_scaling(input) | Standard scaling (z-score) |
| inverse_standard_scaling(input) | Inverse standard scaling |
| crop(from, to) | Create sub-profile with feature subset |
Dimensionality Reduction — PCA
Identify the most important dimensions using Principal Component Analysis.
// Learn PCA from a GaussianND profile
var profile = GaussianND {};
profile.learn(data);
var pca = PCA {};
pca.learn(profile.correlation()!!, profile.avg()!!, profile.std()!!, 0.95);
// pca.best_dimension now holds the number of dimensions retaining 95% variance
// Set target dimensionality and transform
pca.set_dimension(pca.best_dimension!!);
var reduced = pca.transform(data); // [batch x N] → [batch x best_dim]
var reconstructed = pca.inverse_transform(reduced); // back to original space
API Reference — PCA
| Method | Description |
|---|---|
| learn(correlation, avg, std, threshold?) | Learn eigenvectors from correlation matrix. Threshold (default 0.95) sets variance retention |
| set_dimension(dim) | Set number of output dimensions |
| transform(input) | Project from N to dim dimensions |
| inverse_transform(input) | Project back from dim to N dimensions |
| get_dimension(threshold) | Get number of dimensions for a given variance threshold |
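For example, get_dimension makes it easy to compare how many components different variance targets require. A minimal sketch, reusing the pca instance trained above:

```
// Compare dimension counts for different variance-retention targets
var dims_90 = pca.get_dimension(0.90); // components needed to keep 90% of variance
var dims_99 = pca.get_dimension(0.99); // components needed to keep 99% of variance
```

A higher threshold never requires fewer dimensions, so dims_99 >= dims_90.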
Neural Networks
High-level API for building, training, and evaluating neural networks.
Regression Network
var inputs = 7;
var outputs = 2;
// Create network
var nn = RegressionNetwork::new(
inputs, outputs, TensorType::f64,
false, // inputs_gradients
0, // fixed_batch_size (0 = dynamic)
42, // seed
);
// Optional preprocessing
nn.setPreProcess(PreProcessType::standard_scaling, inputProfile);
nn.setPostProcess(PostProcessType::standard_scaling, outputProfile);
// Add layers
nn.addDenseLayer(5, true, ComputeActivationRelu {}, null);
nn.addDenseLayer(3, true, ComputeActivationSigmoid {}, null);
nn.addDenseLayer(outputs, true, ComputeActivationRelu {}, null);
// Configure loss and optimizer
nn.setLoss(ComputeRegressionLoss::square, ComputeReduction::auto);
nn.setOptimizer(ComputeOptimizerAdam {});
// Build and compile
var engine = ComputeEngine {};
var model = nn.build(true);
var batchSize = nn.initWithBatch(model, engine, null, batch);
// Training loop
for (var epoch = 0; epoch < 100; epoch++) {
var inputTensor = nn.getInput(engine);
var targetTensor = nn.getTarget(engine);
// fill inputTensor and targetTensor with data...
var loss = nn.train(engine);
// Validation
var valLoss = nn.validation(engine);
}
// Prediction
nn.getInput(engine)?.fill(newData);
var prediction = nn.predict(engine);
Classification Network
var inputs = 10;
var classes = 3;
var nn = ClassificationNetwork::new(
inputs, classes, TensorType::f64,
false, // inputs_gradients
0, // fixed_batch_size
42, // seed
true, // calculate_probabilities
true, // from_logits
false, // has_class_weights
);
nn.addDenseLayer(5, true, ComputeActivationRelu {}, null);
nn.addDenseLayer(classes, true, ComputeActivationSigmoid {}, null);
nn.setLoss(ComputeClassificationLoss::sparse_categorical_cross_entropy, null);
nn.setOptimizer(ComputeOptimizerSgd {});
AutoEncoder Network
var nn = AutoEncoderNetwork::new(
inputs, TensorType::f64,
false, 0, 42,
);
// Encoder layers
nn.addDenseLayer(64, true, ComputeActivationRelu {}, null);
nn.addDenseLayer(16, true, ComputeActivationRelu {}, null); // bottleneck
// Decoder layers
nn.addDenseLayer(64, true, ComputeActivationRelu {}, null);
nn.addDenseLayer(inputs, true, ComputeActivationSigmoid {}, null);
nn.setEncoderLayer(1); // bottleneck layer index
nn.setLoss(ComputeRegressionLoss::square, null);
LSTM Layers
Add LSTM layers for sequence modeling:
var nn = RegressionNetwork::new(inputs, outputs, TensorType::f64, false, 0, 42);
nn.addDenseLayer(5, true, ComputeActivationRelu {}, null);
nn.addLSTMLayer(
6, // output size
3, // number of stacked LSTM layers
10, // sequence length
true, // use_bias
true, // return_sequences
true, // bidirectional
null, // initializer config
);
nn.addLSTMLayer(3, 3, 10, true, false, false, null); // last LSTM: return_sequences=false
nn.addDenseLayer(outputs, true, ComputeActivationRelu {}, null);
Available Components
Activations: Relu, LeakyRelu, Sigmoid, Tanh, Softmax, Softplus, SoftSign, Selu, Elu, Celu, HardSigmoid, Exp
Optimizers:
| Optimizer | Description |
|---|---|
| ComputeOptimizerAdam | Adam (default, lr=0.001) |
| ComputeOptimizerSgd | Stochastic Gradient Descent (lr=0.01) |
| ComputeOptimizerRmsProp | RMSprop |
| ComputeOptimizerAdaDelta | Adadelta |
| ComputeOptimizerAdaGrad | Adagrad |
| ComputeOptimizerAdaMax | Adamax |
| ComputeOptimizerNadam | Nadam |
| ComputeOptimizerFtrl | FTRL |
| ComputeOptimizerMomentum | SGD with momentum |
| ComputeOptimizerNesterov | SGD with Nesterov momentum |
Layer types: Linear, Dense, LSTM, Activation, Filter
Loss functions:
- Regression: square, abs
- Classification: categorical_cross_entropy, sparse_categorical_cross_entropy
Preprocessing: min_max_scaling, standard_scaling, pca_scaling
Weight initializers: xavier, xavier_uniform, relu, relu_uniform, lecun_uniform, normal, uniform, pytorch, identity, constant, and more
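Any of the components listed above can be swapped into the network setup shown earlier. A sketch, assuming nn is the RegressionNetwork from the regression example:

```
// Swap in an alternative loss and optimizer from the tables above
nn.setLoss(ComputeRegressionLoss::abs, ComputeReduction::auto);
nn.setOptimizer(ComputeOptimizerRmsProp {});
```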
K-Means Clustering
Mini-batch K-means clustering with meta-learning for optimal initialization.
var batchSize = 10;
var features = 6;
var clusters = 3;
var tensorType = TensorType::f64;
var rounds = 10;
// Configure and compile
var engine = ComputeEngine {};
var model = Kmeans::configure(clusters, features, tensorType, true);
engine.compile(model, batchSize);
// Initialize
Kmeans::initialize(engine, 42);
// Training loop
for (var round = 0; round < rounds; round++) {
Kmeans::init_round(engine);
// Feed mini-batches
for (var mb = 0; mb < numBatches; mb++) {
Kmeans::learn(engine, miniBatches[mb]);
}
Kmeans::end_round(engine);
var loss = Kmeans::getSumOfDistances(engine).get(Array<int> { 0 });
}
// Compute statistics
Kmeans::calculate_stats(engine);
var centroids = Kmeans::getClustersCentroids(engine);
var counts = Kmeans::getClustersCounts(engine);
var avgDist = Kmeans::getClustersAvgOfDistances(engine);
var interDist = Kmeans::getClustersDistancesToEachOther(engine);
// Inference on new data
var assignment = Kmeans::cluster(engine, newData);
API Reference — Kmeans
| Method | Description |
|---|---|
| Kmeans::configure(clusters, features, type, stats) | Build compute model |
| Kmeans::initialize(engine, seed) | Initialize engine |
| Kmeans::init_round(engine) | Reset counters for new round |
| Kmeans::learn(engine, batch) | Train on a mini-batch |
| Kmeans::end_round(engine) | Finalize training round |
| Kmeans::calculate_stats(engine) | Compute cluster statistics |
| Kmeans::cluster(engine, batch) | Assign clusters to data |
| Kmeans::getClustersCentroids(engine) | Get centroid positions [K x F] |
| Kmeans::getClustersCounts(engine) | Get sample count per cluster |
| Kmeans::getSumOfDistances(engine) | Get total loss |
| Kmeans::getClustersAvgOfDistances(engine) | Get avg distance per cluster |
| Kmeans::getClustersDistancesToEachOther(engine) | Get inter-cluster distances [K x K] |
| Kmeans::sortClusters(engine) | Sort clusters deterministically |
| Kmeans::getInferenceEngine(result, batchSize, stats) | Create inference engine from trained result |
Signal Processing — FFT
Fast Fourier Transform for frequency analysis, filtering, and extrapolation.
var N = 1000;
var timeseries_complex = Tensor {};
timeseries_complex.init(TensorType::c128, Array<int> { N });
// Fill with a composite sine wave
var freq1 = 5.0;
var freq2 = 7.0;
var t = 0.0;
var dt = 1.0 / (freq1 * 200.0);
for (var i = 0; i < N; i++) {
timeseries_complex.set(Array<int> { i }, sin(2 * MathConstants::pi * freq1 * t)
+ 0.3 * sin(2 * MathConstants::pi * freq2 * t));
t = t + dt;
}
// Forward FFT: time → frequency domain
var frequency_complex = Tensor {};
var fft = FFT::new(N, false);
fft.transform(timeseries_complex, frequency_complex);
// Analyze frequency spectrum
var sampling_step = dt; // time between consecutive samples
var freq_table = FFT::get_frequency_table(frequency_complex, sampling_step);
// Apply low-pass filter
var filtered = Tensor {};
var cutoff = FFT::get_low_pass_filter_size(frequency_complex, 0.95);
FFT::apply_low_pass_filter(frequency_complex, filtered, cutoff);
// Inverse FFT: frequency → time domain
var fft_inv = FFT::new(N, true);
var reconstructed = Tensor {};
fft_inv.transform(reconstructed, frequency_complex);
// Extrapolation using frequency components
var value = FFT::extrapolate(frequency_complex, sampling_step, start_time, target_time, cutoff);
FFTModel — High-Level Time-Series Analysis
var model = FFTModel::train(myNodeTime, fromTime, toTime);
// Predict a single value
var predicted = model.extrapolate_value(futureTime, 0.95, null);
// Predict a range
var table = model.extrapolate(fromTime, toTime, 0.95, null, null);
API Reference — FFT
| Method | Description |
|---|---|
| FFT::new(n, inverse) | Create FFT engine for n samples |
| fft.transform(time, freq) | Execute forward or inverse FFT |
| fft.transform_table(ts, time_c, freq_c) | Transform from Table, return frequency table |
| FFT::get_frequency_table(freq, step) | Get frequency analysis table |
| FFT::get_frequency_spectrum(freq, spec, db, filter) | Extract spectrum with optional dB conversion |
| FFT::apply_low_pass_filter(src, dst, cutoff) | Apply low-pass filter |
| FFT::get_low_pass_filter_size(freq, ratio) | Get cutoff for desired signal retention ratio |
| FFT::extrapolate(freq, step, start, t, filter) | Predict value at time t |
| FFT::extrapolate_table(time_c, step, start, from, to, skip) | Predict range of values |
| FFT::get_next_fast_size(n) | Get optimal FFT size >= n |
Pattern Detection
Detect recurring patterns in time-series using multiple algorithms.
// Create time series
var ts = nodeTime<float> {};
for (var i = 0; i < 50; i++) {
ts.setAt(time::new(i, DurationUnit::seconds), sin(MathConstants::pi * i / 10));
}
// Create detection engine (Euclidean, DTW, FFT, or SAX)
var engine = EuclideanPatternDetectionEngine::new(ts);
engine.state = PatternDetectionEngineState::new();
// Define reference patterns
engine.addPattern(
time::new(10, DurationUnit::seconds),
time::new(15, DurationUnit::seconds),
);
// Compute similarity scores
engine.initScoring();
engine.computeScores(null);
// Detect matches
engine.detect(PatternDetectionSensitivity {
threshold: 0.0, // minimum score threshold
overlap: 1.0, // allowed overlap ratio
}, null);
// Access results
for (timestamp, detection in engine.state.detections) {
info("Match at ${timestamp}: score=${detection.score}, pattern=${detection.best_pattern}");
}
Available Detectors
| Detector | Description |
|---|---|
| EuclideanPatternDetectionEngine | Euclidean distance-based matching |
| DTWPatternDetectionEngine | Dynamic Time Warping |
| FFTPatternDetectionEngine | FFT-based frequency matching |
| SaxPatternDetectionEngine | Symbolic Aggregate Approximation |
| RandomPatternDetectionEngine | Random baseline (for benchmarking) |
Normalization Modes
| Mode | Description |
|---|---|
| as_is | No normalization |
| shift | Vertical shift alignment |
| scaling | Vertical scaling alignment |
| shift_and_scaling | Both shift and scaling |
Polynomial Regression
Fit polynomial curves for regression and time-series compression.
var N = 6;
var degree = 3;
var X = Tensor {};
X.init(TensorType::f64, Array<int> { N });
var Y = Tensor {};
Y.init(TensorType::f64, Array<int> { N });
for (var i = 0; i < N; i++) {
var x = i * 10.0 + 1000;
X.set(Array<int> { i }, x);
Y.set(Array<int> { i }, 53.0 - 0.0002 * x + 0.00001 * x * x);
}
// Fit polynomial
var poly = Polynomial {};
var maxError = poly.learn(degree, X, Y);
// Predict
var predictions = poly.predict(X);
var singleValue = poly.predictValue(1050.0);
Time-Series Compression
Polynomial::compress(originalTS, polynomialTS, 5, 0.01, 1000);
Polynomial::decompress(originalTS, polynomialTS, 0.01, decompressedTS, errorTS);
API Reference — Polynomial
| Method | Description |
|---|---|
| poly.learn(degree, X, Y) | Fit polynomial of given degree. Returns max error |
| poly.predict(X) | Predict Y values for tensor X |
| poly.predictValue(x) | Predict single Y value |
| Polynomial::compress(src, dst, maxDeg, maxErr, bufSize) | Compress time-series with adaptive polynomial fitting |
| Polynomial::decompress(src, poly, maxErr, dst, errTS) | Decompress and verify |
Linear Solver
var weights = Solver::solve(X, Y); // Solve X * w = Y for w
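A minimal sketch of setting up a solve, built with the same Tensor API used in the polynomial example. The shape convention here ([samples x features] for X, [samples] for Y) is an assumption for illustration:

```
var X = Tensor {};
X.init(TensorType::f64, Array<int> { 3, 2 }); // assumed: 3 samples, 2 features
var Y = Tensor {};
Y.init(TensorType::f64, Array<int> { 3 });    // assumed: 3 target values
// fill X and Y with observations...
var weights = Solver::solve(X, Y); // least-squares weights, one per feature
```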
Time-Series Decomposition
Aggregate instant-level data into coarser time resolutions.
TimeSeriesDecomposition::calculateAll(
instantTS, // source
hourlyTS, // hourly aggregation (nullable)
dailyTS, // daily aggregation (nullable)
weeklyTS, // weekly aggregation (nullable)
monthlyTS, // monthly aggregation (nullable)
yearlyTS, // yearly aggregation (nullable)
TimeZone::Europe_Luxembourg,
null, // lastUpdatedTime (null = full recalculation)
);
Supports incremental updates by passing lastUpdatedTime to recompute only from that point forward.
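For instance, a recurring job that recomputes only the aggregates affected since its previous run might look like the following sketch, where lastUpdate is a hypothetical time value tracked by the caller:

```
// Incremental update: only aggregates after lastUpdate are recomputed
TimeSeriesDecomposition::calculateAll(
    instantTS,
    hourlyTS, dailyTS, weeklyTS, monthlyTS, yearlyTS,
    TimeZone::Europe_Luxembourg,
    lastUpdate, // e.g. the timestamp of the previous run
);
```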
Climate Utilities
// Calculate Universal Thermal Climate Index
var utci_temp = utci(
25.0, // outdoor air temperature (°C)
3.0, // average wind speed (m/s)
30.0, // mean radiant temperature (°C)
50.0, // relative humidity (%)
);
ComputeEngine — Low-Level API
For advanced use cases, the ComputeEngine provides direct access to computational graphs.
// Define a compute model with custom operations
var model = ComputeModel {
layers: Array<ComputeLayer> {
ComputeLayerCustom {
name: "ops",
vars: Array<ComputeVariable> {
ComputeVarInOut { name: "a", with_grad: false, shape: Array<int> { 3, 2 }, type: TensorType::f64 },
ComputeVarInOut { name: "b", with_grad: false, shape: Array<int> { 3, 2 }, type: TensorType::f64 },
ComputeVar { name: "c" },
},
ops: Array<ComputeOperation> {
ComputeOperationAdd { input: "a", input2: "b", output: "c" },
},
},
},
};
var engine = ComputeEngine {};
engine.compile(model, 10);
engine.configure(true); // forward-only mode
engine.initialize();
// Set inputs and execute
engine.getVar("ops", "a")?.fill(1.5);
engine.getVar("ops", "b")?.fill(2.5);
engine.forward("ops");
var result = engine.getVar("ops", "c"); // every element of c equals 4.0
// State persistence
var state = ComputeState {};
engine.saveState(state);
// ... later ...
engine.loadState(state);
Available Operations
Arithmetic: Add, Sub, Mul, Div, Pow, MatMul, Scale, RaiseToPower, AddBias
Unary math: Abs, Neg, Sqrt, Exp, Log, Sign
Trigonometric: Sin, Cos, Tan, Asin, Acos, Atan, Sinh, Cosh, Tanh
Activation: Relu, LeakyRelu, Sigmoid, Softmax, Softplus, SoftSign, Selu, Elu, Celu, HardSigmoid, LogSoftmax, LeCunTanh
Reduction: Sum, Avg, ArgMin, ArgMax, SumIf, Euclidean
Utility: Fill, Filter, Clip