Create a DlmLossFn that adds Bayesian prior penalties to the Kalman deviance (−2·logL).
The returned function computes:

objective(θ) = deviance(θ) + Σ [−2 · log prior(paramᵢ)]
Prior penalties (dropping constants that don't affect optimisation):

- Inverse-gamma(α, β) prior on a variance σ²: penalty = 4(α+1)·log(σ) + 2β/σ²
- Gaussian(μ, σ_p²) prior on a coefficient φ: penalty = (φ − μ)² / σ_p²
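As a sketch (not the library's internals), the two penalty terms can be written as plain functions. The names `invGammaPenalty` and `gaussianPenalty` are illustrative, not part of the dlm-js API:

```typescript
// -2 * log IG(sigma2; alpha, beta), constants dropped.
// Note 2(alpha+1)*log(sigma2) + 2*beta/sigma2 equals the documented
// form 4(alpha+1)*log(sigma) + 2*beta/sigma^2, since sigma2 = sigma^2.
function invGammaPenalty(sigma2: number, alpha: number, beta: number): number {
  return 2 * (alpha + 1) * Math.log(sigma2) + (2 * beta) / sigma2;
}

// -2 * log N(phi; mu, sd^2), constants dropped.
function gaussianPenalty(phi: number, mu: number, sd: number): number {
  return ((phi - mu) / sd) ** 2;
}
```

The inverse-gamma penalty is minimised at σ² = β/(α+1), so the `shape`/`rate` pair in `dlmPrior` effectively pulls the optimiser toward that variance.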
All operations are AD-safe (`np.split`, `np.log`, `np.multiply`, etc.) and compose inside `jit()` with zero overhead.
```typescript
import { dlmMLE, dlmPrior } from 'dlm-js';

const prior = dlmPrior({
  obsVar: { shape: 2, rate: 100 },
  processVar: { shape: 2, rate: 10 },
});

const result = await dlmMLE(y, { order: 1, loss: prior });
// result.priorPenalty > 0
```