decepticons

O(n) attention is deception. decepticons is a shared kernel for predictive descendants that want reusable memory and readout primitives without inheriting any one runtime's policy.

decepticons extracts reusable model mechanisms from a broader experiment family so downstream systems can specialize without forking the kernel itself.

What It Does

decepticons provides the mechanism layer: the memory and readout primitives shared across descendants.

It is intentionally not a full runtime system; that work belongs in descendants such as chronohorn.

Install

python3 -m pip install -e .

Quick start:

python3 -m venv .venv
source .venv/bin/activate
pip install -e .
python3 examples/quickstart.py

CLI

decepticons fit --input ./corpus.txt --prompt "predictive " --generate 80

Python

from decepticons import ByteCodec, ByteLatentPredictiveCoder

text = "predictive coding likes repeated structure.\n" * 64
model = ByteLatentPredictiveCoder()
fit_report = model.fit(text)

prompt = ByteCodec.encode_text("predictive ")
sample = model.generate(prompt, steps=40, greedy=True)

print(fit_report.train_bits_per_byte)
print(ByteCodec.decode_text(sample))
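train_bits_per_byte is presumably the standard byte-level compression metric; as a minimal sketch of how to read it (plain Python, independent of decepticons, assuming the usual definition):

```python
import math

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    # Convert a summed negative log-likelihood in nats over a byte
    # stream into bits per byte, the usual byte-level LM metric.
    return total_nll_nats / (n_bytes * math.log(2))

# Sanity check: a uniform model over 256 byte values scores 8 bits/byte,
# i.e. no compression at all. Lower is better; a corpus with repeated
# structure like the quickstart text should land well below 8.
uniform_nll = 100 * math.log(256)  # 100 bytes, -log p = log 256 each
print(bits_per_byte(uniform_nll, 100))  # → 8.0
```

A model reporting b bits per byte corresponds to compressing the training bytes to roughly b/8 of their original size.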

Architecture

The intended ecosystem split is:

decepticons -> chronohorn -> heinrich
kernel         runtime       evidence / audit

Ownership is simple: the kernel owns mechanisms; descendants own runtime policy.

Kernel Boundary

The boundary test is simple. If a mechanism can be named without reference to a specific descendant and used unchanged by more than one downstream system, it belongs in the kernel. Otherwise it stays in the descendant.
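As a purely hypothetical illustration of that rule (these classes are not decepticons API): the mechanism below is nameable without reference to any descendant, while the reset policy is runtime-specific and stays downstream.

```python
class DecayMemoryBank:
    """Kernel-side mechanism: an exponentially decayed running sum.
    Nameable without any descendant; usable unchanged downstream."""

    def __init__(self, decay: float):
        self.decay = decay
        self.state = 0.0

    def update(self, x: float) -> float:
        self.state = self.decay * self.state + x
        return self.state


class RuntimeMemory(DecayMemoryBank):
    """Descendant-side policy: deciding *when* to reset is a runtime
    concern, so it lives here rather than in the kernel."""

    def update(self, x: float) -> float:
        if abs(x) > 10.0:  # hypothetical runtime-specific reset rule
            self.state = 0.0
        return super().update(x)
```

The kernel class carries no knowledge of the subclass; the subclass changes only policy, never the mechanism.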

Modules

Docs

Scope

This is a research kernel and reference implementation.

The current pressure from chronohorn is O(n) causal-bank architecture search.

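To make the O(n) claim concrete, here is a minimal causal linear-attention step in plain Python (illustrative only, not decepticons API): each token folds its key/value into a fixed-size running state and reads out with its query, so the whole sequence costs O(n) rather than the O(n²) of softmax attention.

```python
def linear_attn_step(state, norm, q, k, v):
    # state[i][j] accumulates k[i] * v[j]; norm[i] accumulates k[i].
    # One call per token: O(d^2) work against a fixed-size state,
    # independent of how many tokens came before.
    d = len(q)
    for i in range(d):
        norm[i] += k[i]
        for j in range(d):
            state[i][j] += k[i] * v[j]
    denom = sum(q[i] * norm[i] for i in range(d)) or 1.0
    return [sum(q[i] * state[i][j] for i in range(d)) / denom
            for j in range(d)]

# With a single write, the readout recovers the stored value exactly:
state = [[0.0, 0.0], [0.0, 0.0]]
norm = [0.0, 0.0]
print(linear_attn_step(state, norm, [1.0, 0.0], [1.0, 0.0], [0.5, 2.0]))
# → [0.5, 2.0]
```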
It is not a runtime. It exists to keep the shared mechanism layer reusable and legible.

License

MIT