Non-record submission: Depth Recurrence + Legal Score-First TTT (10L, 1.1532 BPB) #456
- 10-layer GPT with depth recurrence, BigramHash, SmearGate, XSA, U-Net skips
- Mixed int5/int6 quantization + zstd-22 compression (15.9MB artifact)
- Competition-legal score-first TTT: scores each chunk before training on it
- val_bpb: 1.1532 (pre-TTT: 1.1600)
- Trained on 4xA100-40GB, 5200 steps, 2283s training + 458s eval
Community Review — Non-record submission: Depth Recurrence + Legal Score-First TTT (10L, 1.1532 BPB)

BPB: 1.1532 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA f5e802b): the TTT path at line 757 implements the score-first-per-chunk pattern: each chunk is scored under the pre-update weights before any optimizer step touches it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.07s, dim=512, layers=10, vocab=1024, code=66874 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based analysis.
Legal Score-First TTT (10L, 1.1532 BPB)
10-layer GPT with competition-legal score-first full-model test-time training,
mixed int5/int6 quantization, and community-standard architecture components.
What's novel
The main contribution is competition-legal full-model TTT integrated into
sliding-window evaluation. Prior legal TTT work (PR #77) used per-document
LoRA adapters with resets. This submission replaces that with a chunked
score-first loop over all 25.5 M parameters — no LoRA, no adapter resets
between documents — giving the model persistent memory across the entire
validation set.
eval_val_sliding_ttt() divides validation into 32k-token chunks, scores
each chunk first (satisfying the "already graded" rule), then trains with one
AdamW step per chunk. Cosine LR decay across chunks prevents catastrophic
forgetting. Improvement: 1.1600 → 1.1532 BPB (−0.0068). A sketch of the loop
appears under "TTT details" below.
Architecture summary
10 layers, d_model=512, 8 heads / 4 KV heads (GQA 2:1), 3× relu² MLP,
BigramHash(10240), SmearGate, XSA on last 3 layers, U-Net skip connections.
Depth recurrence infrastructure exists in the code but is not active
(unique_layers = num_layers = 10).
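For orientation, the numbers above assemble into roughly the following
config. This is an illustrative sketch only; the field names are not taken
from the submission's code, and the XSA layer indices are an assumption
(the PR says only "last 3 layers").

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    num_layers: int = 10
    unique_layers: int = 10           # == num_layers, so recurrence is inactive
    d_model: int = 512
    n_heads: int = 8
    n_kv_heads: int = 4               # GQA 2:1
    mlp_ratio: int = 3                # 3x relu^2 MLP
    bigram_hash_buckets: int = 10240  # BigramHash table size
    smear_gate: bool = True
    xsa_layers: tuple = (7, 8, 9)     # XSA on the last 3 layers (0-indexed)
    unet_skips: bool = True           # U-Net-style skip connections
    vocab_size: int = 1024            # from the smoke-test log
```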
Training recipe
Muon + AdamW, lr 0.025/0.035/0.025 (matrices/embeddings/scalars),
786,432 tokens/step, 20 warmup → 3,000 warmdown, SWA from step 4,650,
late QAT, GPTQ-lite on 75% of layers, zstd-22 compression.
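A minimal sketch of how the three learning-rate classes might be wired up,
assuming the usual shape-based parameter split. Muon is not a stock torch
optimizer, so AdamW stands in for the matrix group here; the name check on
"embed" is a heuristic, not the submission's actual grouping logic.

```python
import torch

def build_optimizers(model, lr_matrix=0.025, lr_embed=0.035, lr_scalar=0.025):
    """Split parameters into the three lr classes named in the recipe."""
    matrices, embeds, scalars = [], [], []
    for name, p in model.named_parameters():
        if p.ndim >= 2 and "embed" not in name:
            matrices.append(p)   # hidden weight matrices -> Muon in the real recipe
        elif p.ndim >= 2:
            embeds.append(p)     # embedding / unembedding tables
        else:
            scalars.append(p)    # biases, gains, gate scalars
    # AdamW as a stand-in where the submission uses Muon.
    opt_matrix = torch.optim.AdamW(matrices, lr=lr_matrix)
    opt_rest = torch.optim.AdamW([
        {"params": embeds, "lr": lr_embed},
        {"params": scalars, "lr": lr_scalar},
    ])
    return opt_matrix, opt_rest
```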
TTT details
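The submission's loop lives in eval_val_sliding_ttt(); the sketch below is a
minimal reconstruction of the score-first-per-chunk pattern, not the actual
code. Assumptions: model is a causal LM returning logits, val_tokens is a 1-D
LongTensor, tokens are byte-level (so nats/ln 2 gives BPB), and base_lr is a
placeholder since the PR does not state the TTT learning rate.

```python
import math
import torch
import torch.nn.functional as F

def score_first_ttt(model, val_tokens, chunk_size=32768,
                    base_lr=1e-4, device="cuda"):
    """Score-first TTT: every chunk is graded under the current weights
    BEFORE the optimizer updates on it, so no token is ever evaluated by
    a model that has already trained on that token."""
    opt = torch.optim.AdamW(model.parameters(), lr=base_lr)
    chunks = [val_tokens[i:i + chunk_size + 1]
              for i in range(0, len(val_tokens) - 1, chunk_size)]
    total_nll, total_tokens = 0.0, 0

    for step, chunk in enumerate(chunks):
        x = chunk[:-1].unsqueeze(0).to(device)
        y = chunk[1:].unsqueeze(0).to(device)

        # 1) Score the chunk with the pre-update weights.
        model.eval()
        with torch.no_grad():
            logits = model(x)
            nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                  y.reshape(-1), reduction="sum")
        total_nll += nll.item()
        total_tokens += y.numel()

        # 2) Only now take one AdamW step on the same chunk.
        #    Cosine decay across chunks limits catastrophic forgetting.
        lr = base_lr * 0.5 * (1 + math.cos(math.pi * step / max(len(chunks) - 1, 1)))
        for g in opt.param_groups:
            g["lr"] = lr
        model.train()
        opt.zero_grad(set_to_none=True)
        logits = model(x)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
        loss.backward()
        opt.step()

    # Bits per byte, assuming byte-level tokens.
    return total_nll / total_tokens / math.log(2)
```

The ordering of steps (1) and (2) is the whole compliance argument: the loss
reported for a chunk is always computed before any gradient from that chunk
reaches the weights.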
Credits
This submission builds on work from many contributors to the parameter-golf competition:
- Parameter-golf starter code by Beren Millidge & Keller Jordan.