Record: 11L Int5-All + XSA5 + EMA + 10% Pruning (val_bpb=1.1466)#389
trasnake87 wants to merge 1 commit into openai:main
Conversation
not a record
Previous run: 18 MB artifact with int6 only on middle layers. Fix: apply int6 step=4 rounding to ALL block and VE weights (not just layers 3-7). Additionally prune smallest 10% of weights to zero for better zstd compression. PR openai#389 validates this approach (~500KB savings). Expected: 18 MB → ~15 MB (int6-all saves ~1.5 MB, pruning saves ~500KB). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
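As a rough illustration of what "int6 step=4 rounding" means here, assuming weights are already coded as scaled int8 values (a hypothetical sketch, not the commit's actual code): rounding every int8 code to the nearest multiple of 4 leaves 2^6 = 64 distinct levels, i.e. int6 precision kept in int8 storage, which zstd then compresses well.

```python
import torch

def int6_step4_round(w_int8: torch.Tensor) -> torch.Tensor:
    # Round int8-coded weights to the nearest multiple of 4.
    # Levels -32..31 (times 4) give 2**6 = 64 distinct values: int6 precision
    # kept in int8 storage, which the later zstd pass compresses well.
    q = torch.round(w_int8.float() / 4.0).clamp(-32, 31)
    return (q * 4.0).to(torch.int8)
```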
Community Review — Record: 11L Int5-All + XSA5 + EMA + 10% Pruning (val_bpb=1.1466)

BPB: 1.1466 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05 s, dim=512, layers=11, vocab=1024, code=69722 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16 MB artifact cap, ≤600 s train + ≤600 s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or hidden behind a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
Thanks for the thorough review, @MatoTeziTanka! Confirming the submission is pure-neural — no TTT, no SLOT, no n-gram cache, and the eval path is the standard sliding-window stride-64 pattern. On the 3-seed validation requirement: we ran seed=42 and got val_bpb=1.1466 on 8×H100 SXM within the 10 min train + 10 min eval budget. We'd be happy to run seeds 1337 and 2024 for statistical significance if maintainers want to consider this as a record candidate — just let us know.
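For reference, here is a minimal sketch of what the sliding-window stride-64 bpb eval does (function name, shapes, and the one-token-per-byte simplification are illustrative, not the submission's actual eval code):

```python
import math
import torch

@torch.no_grad()
def sliding_window_bpb(model, tokens, window=1024, stride=64, device="cuda"):
    # Slide a fixed-size context window over the validation tokens and only
    # score the last `stride` positions of each window, so every scored token
    # sees (near-)full left context and no token is counted twice.
    total_nll, total_count = 0.0, 0
    for start in range(0, len(tokens) - window - 1, stride):
        chunk = tokens[start : start + window + 1]
        x = torch.tensor(chunk[:-1], device=device).unsqueeze(0)
        y = torch.tensor(chunk[1:], device=device).unsqueeze(0)
        logits = model(x)                                    # [1, window, vocab]
        logp = torch.log_softmax(logits.float(), dim=-1)
        nll = -logp.gather(-1, y.unsqueeze(-1)).squeeze(-1)  # [1, window]
        total_nll += nll[:, -stride:].sum().item()
        total_count += stride
    # Bits per byte, assuming one token corresponds to one byte on average.
    return total_nll / total_count / math.log(2)
```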
11L Int5-All + XSA5 + EMA + 10% Pruning
val_bpb = 1.1466 (sliding window stride=64) | 14.8 MB artifact | 8xH100 SXM, 600s
What's new
Two novel modifications on top of the XSA+EMA stack (PR #315):
1. Uniform int5 quantization for both MLP and attention weights. Prior SOTA uses int5 for MLP and int6 for attention; through a systematic post-training quantization search, we found attention weights tolerate int5 as well as MLP weights do. Saves ~1 MB compressed.
2. 10% magnitude pruning after EMA averaging, before quantization. This zeroes the smallest weights, which compress to nearly nothing under zstd. Saves ~500 KB.

Combined, the artifact drops from ~15.6 MB to 14.8 MB with minimal quality impact.
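A minimal sketch of the compression pipeline described above, assuming EMA-averaged float weights as input (function names, per-tensor scaling, and the zstd level are illustrative rather than the submission's exact scheme):

```python
import io
import torch
import zstandard as zstd

def prune_and_quantize(state_dict, prune_frac=0.10, bits=5):
    # 1) Magnitude pruning: zero the smallest `prune_frac` of each tensor's
    #    weights; the resulting zeros compress to almost nothing under zstd.
    # 2) Symmetric intN quantization with a per-tensor scale.
    out, qmax = {}, 2 ** (bits - 1) - 1
    for name, w in state_dict.items():
        w = w.float()
        k = int(prune_frac * w.numel())
        if k > 0:
            thresh = w.abs().flatten().kthvalue(k).values
            w = torch.where(w.abs() <= thresh, torch.zeros_like(w), w)
        scale = w.abs().max().clamp(min=1e-8) / qmax
        q = torch.round(w / scale).clamp(-qmax, qmax).to(torch.int8)
        out[name] = (q, scale)
    return out

def pack_artifact(quantized, level=19):
    # Serialize the quantized tensors and zstd-compress the payload.
    buf = io.BytesIO()
    torch.save(quantized, buf)
    return zstd.ZstdCompressor(level=level).compress(buf.getvalue())
```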
Results (seed 42, 8xH100 SXM): val_bpb = 1.1466 with the sliding-window stride-64 eval; 14.8 MB artifact after zstd.
Run
All defaults baked into the script. No env vars needed.
Built on PR #315 by @alertcat.