Non-record: 15L Depth Recurrence + LeakyReLU² — BI-guided weight tying (val_bpb=1.1360)#857
…al_bpb=1.1093) 15 effective layers from 11 unique blocks via BI-guided weight tying.
- Layers 9-13 share one block (lowest Block Influence scores)
- 27M unique params, int6+zstd = 15.7MB artifact
- 3-seed results: seed 42: 1.1048 BPB | seed 1337: 1.1092 BPB | seed 2025: 1.1138 BPB
- Mean: 1.1093 ± 0.0045

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Removed multi-epoch corpus-level TTT (ruled illegal in issue openai#677)
- Added legal score-first TTT option (PR #549 pattern), but it hurts this model
- Clean result: 15L depth recurrence + LeakyReLU² = 1.1360 BPB (no TTT)
- Documented depth-vs-steps tradeoff in README
- Draft status: single seed, architecture exploration

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Community Review — Non-record: 15L Depth Recurrence + LeakyReLU² — BI-guided weight tying (val_bpb=1.1360)

BPB: 1.1360 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.06s, dim=512, layers=11, vocab=1024, code=81374 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
Summary
val_bpb: 1.1360 (seed 1337) | 15.87 MB | 8xH100 SXM
Non-record submission exploring BI-guided depth recurrence: Block Influence scores (ShortGPT) identify which layer
positions transform their input the least and can therefore share weights, yielding 15 effective layers from 11
unique parameter blocks within the same 16MB budget as the standard 11L baseline.
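As a rough illustration of the selection criterion, Block Influence (from ShortGPT) measures how much a block changes its input: BI is the mean of one minus the cosine similarity between a block's input and output hidden states, and low-BI positions are the tying candidates. This is a hypothetical sketch with illustrative names, not the PR's code:

```python
# Sketch of the Block Influence (BI) score: BI = 1 - E[cos(h_in, h_out)]
# over token hidden states. A near-identity block scores close to 0, which
# marks its position as a candidate for sharing weights with its neighbors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def block_influence(hidden_in, hidden_out):
    """Mean (1 - cosine similarity) over paired input/output hidden states."""
    sims = [cosine(u, v) for u, v in zip(hidden_in, hidden_out)]
    return 1.0 - sum(sims) / len(sims)

# Toy example: a block that barely perturbs its input has low BI.
h_in = [[1.0, 0.0], [0.0, 1.0]]
h_out = [[0.99, 0.01], [0.01, 0.99]]
print(block_influence(h_in, h_out))  # close to 0 -> near-identity block
```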
Result
Key Technique: BI-Guided Weight Tying
Depth vs Steps Tradeoff
15L runs at 116ms/step (vs 86ms/step for 11L) because of the 4 extra forward/backward layer passes. In the 600s
budget that yields ~5170 steps vs ~6975 for 11L. The depth advantage doesn't fully compensate for ~1800 fewer
optimizer steps in this wallclock-limited setting; at equal step counts, 15L outperforms 11L.
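The step counts above follow directly from the per-step times and the 600s train budget:

```python
# Back-of-envelope check of the depth-vs-steps tradeoff. Step times
# (116 ms for 15L, 86 ms for 11L) are taken from the PR text; 600 s is
# the record track's train wallclock limit.
budget_s = 600
steps_15l = budget_s / 0.116  # ~5172 steps for the 15L model
steps_11l = budget_s / 0.086  # ~6977 steps for the 11L baseline
deficit = steps_11l - steps_15l  # ~1804 fewer steps for 15L
print(round(steps_15l), round(steps_11l), round(deficit))
```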
Architecture
15L (10 unique + 1 shared×5), 512d, 8H/4KV GQA, MLP 3x, LeakyReLU(0.5)², XSA4, Partial RoPE 16/64, LN Scale, VE128,
SmearGate, BigramHash(2048), EMA, SWA, Late QAT, int6+zstd-22, FA3.
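The "15L (10 unique + 1 shared×5)" layout can be sketched as a position-to-block map: positions 9-13 (the lowest-BI span from the commit message) all index the same parameter block, so 15 layer positions draw on only 11 unique blocks. Function and variable names here are illustrative, not from the submission:

```python
# Hypothetical sketch of the BI-guided tying layout: 15 layer positions
# mapped onto 11 unique parameter blocks, with the low-BI span 9-13
# sharing a single block. The forward pass would index blocks via this map.
def build_layer_map(total_layers=15, shared_span=range(9, 14)):
    layer_map, next_block = [], 0
    for pos in range(total_layers):
        if pos in shared_span and pos != shared_span.start:
            layer_map.append(layer_map[shared_span.start])  # reuse shared block
        else:
            layer_map.append(next_block)
            next_block += 1
    return layer_map

lm = build_layer_map()
print(lm)                      # [0, 1, ..., 8, 9, 9, 9, 9, 9, 10]
print(len(set(lm)), len(lm))  # 11 unique blocks, 15 effective layers
```

Only the 11 unique blocks count against the 16MB artifact cap, while all 15 positions cost forward/backward compute, which is exactly the depth-vs-steps tradeoff described above.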
Reproduce
Credits
Base: signalrush (PR #374/#414). LeakyReLU²: PR #493, PR #518. Block Influence: ShortGPT (arXiv:2403.03853).