O(n) attention is deception. decepticons is a shared kernel for predictive descendants that want reusable memory and readout primitives without inheriting any one runtime's policy.
decepticons extracts reusable model mechanisms from a broader
experiment family so downstream systems can specialize without forking the
kernel itself.
decepticons provides the mechanism layer (e.g. `decepticons.causal_bank`). It is intentionally not a full runtime system; that work belongs in descendants such as chronohorn.
Install:

```shell
python3 -m pip install -e .
```

Quick start:

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
python3 examples/quickstart.py
```

Or drive the same flow from the CLI:

```shell
decepticons fit --input ./corpus.txt --prompt "predictive " --generate 80
```
The same flow from Python:

```python
from decepticons import ByteCodec, ByteLatentPredictiveCoder

# A tiny corpus with repeated structure is enough to see the model learn.
text = "predictive coding likes repeated structure.\n" * 64

model = ByteLatentPredictiveCoder()
fit_report = model.fit(text)

# Encode a byte-level prompt, then decode the greedy continuation.
prompt = ByteCodec.encode_text("predictive ")
sample = model.generate(prompt, steps=40, greedy=True)

print(fit_report.train_bits_per_byte)
print(ByteCodec.decode_text(sample))
```
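The `train_bits_per_byte` figure printed above is a compression-style score: the average negative log2 probability the model assigns to each observed byte. A minimal, self-contained illustration of the metric (not the library's implementation):

```python
import math

def bits_per_byte(probs):
    """Average negative log2 probability over the bytes that occurred.

    `probs` holds the model's probability for each observed byte;
    lower bits/byte means the model compresses the stream better.
    """
    return -sum(math.log2(p) for p in probs) / len(probs)

# A uniform model over 256 byte values scores exactly 8 bits/byte,
# so any useful model should land well below 8.0 on structured text.
uniform = [1 / 256] * 100
print(bits_per_byte(uniform))  # 8.0
```

Repeated text like the quickstart corpus is highly predictable, which is why it makes a good smoke test for this metric.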
The intended ecosystem split is:

```
decepticons -> chronohorn -> heinrich
   kernel        runtime      evidence / audit
```

Ownership is simple:

- decepticons owns the kernel
- chronohorn owns the runtime
- heinrich owns evidence / audit
What belongs in the kernel:

- substrates
- control, controllers, gating, routing, modulation
- exact_context, ngram_memory, statistical_backoff
- OnlineCausalMemory: runtime n-gram accumulator with a 7-feature query interface
- views, hierarchical_views, linear_views
- readouts, experts
- causal_bank
- the shared configuration surface: substrate_mode, memory_kind, num_blocks, block_mixing_ratio, block_stride, state_dim, state_impl, num_heads, patch_size, patch_causal_decoder, num_hemispheres, fast_hemisphere_ratio, fast_lr_mult, local_poly_order, substrate_poly_order, training_noise, adaptive_reg, readout_bands
- the learnable_substrate_keys() helper and learned_recurrence
- a gated_retention mode where learned matrix memory becomes the primary substrate
- bridge_export, oracle_analysis, teacher_export
- the opc-export contract

What does not belong in the kernel:

- runtime, eval, train_eval, artifacts

The dividing line: if a mechanism can be named without reference to a specific descendant and used unchanged by more than one downstream system, it belongs here. Otherwise it stays in the descendant.

This is a research kernel and reference implementation.
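To give a flavor of the memory primitives, here is a toy sketch in the spirit of OnlineCausalMemory: an online n-gram accumulator that counts next-byte continuations as data streams in. The class name, the single-feature query, and the backoff rule are all simplifications for illustration, not the kernel's actual interface:

```python
from collections import defaultdict

class OnlineNgramMemory:
    """Toy online n-gram accumulator: counts next-byte continuations
    for every context of length 1..max_order as bytes stream in."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, data: bytes):
        for i, b in enumerate(data):
            for n in range(1, self.max_order + 1):
                if i >= n:
                    ctx = bytes(data[i - n:i])
                    self.counts[ctx][b] += 1

    def query(self, ctx: bytes):
        """Return next-byte counts under the longest matching suffix."""
        for n in range(min(len(ctx), self.max_order), 0, -1):
            suffix = ctx[-n:]
            if suffix in self.counts:
                return dict(self.counts[suffix])
        return {}

mem = OnlineNgramMemory()
mem.update(b"abcabcabd")
print(mem.query(b"ab"))  # counts for the bytes seen after "ab"
```

The real mechanism exposes a richer 7-feature query interface; the longest-suffix backoff here loosely stands in for the exact_context / ngram_memory / statistical_backoff split named above.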
It is not a runtime, an evaluation harness, or an artifact store. It exists to keep the shared mechanism layer reusable and legible.

The current pressure from chronohorn is O(n) causal-bank architecture search: 10k ablation lanes to separate mechanisms before promotion.
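An "ablation lane" can be read as one point in the cross-product of mechanism knobs. A hedged sketch of lane enumeration, using knob names from the configuration list above but invented value grids (the real lane definitions live in chronohorn):

```python
from itertools import product

# Hypothetical value grids for a few of the kernel's knobs; the real
# search space is what pushes the lane count toward 10k.
grid = {
    "substrate_mode": ["linear", "hierarchical"],
    "memory_kind": ["exact_context", "ngram_memory", "statistical_backoff"],
    "num_blocks": [2, 4, 8],
    "state_impl": ["dense", "gated_retention"],
}

# One lane = one fully specified knob assignment.
lanes = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(lanes))  # 2 * 3 * 3 * 2 = 36 lanes in this toy grid
```

Keeping the knobs in the kernel and the grids in the descendant is exactly the split the ownership rule above prescribes.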
License: MIT