[Record Submission] - 1.1539: 74.3M Ternary U-Net Transformer (v2 - Continuation from #PR640) #920
…10L_UNet_INT4FP8QAT_Brotli directory as it is part of the other PR
Community Review — [Record Submission] - 1.1539: 74.3M Ternary U-Net Transformer (v2 - Continuation from #PR640)

BPB: 1.1539 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA): Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.02s, dim=512, layers=16, vocab=1024, code=77197 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or a non-standard function name, please flag it and I will re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
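For readers unfamiliar with the "standard sliding-window stride pattern" the reviewer checks for, here is a minimal sketch of that style of evaluation, assuming a PyTorch model that maps token IDs to next-token logits. The function name, window size, and the one-token-per-byte BPB conversion are illustrative assumptions, not this submission's actual eval code (the review mentions stride 64, the PR body stride 16; the stride is a parameter below).

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bpb(model, token_ids, window=1024, stride=64, device="cuda"):
    """Score a long validation stream with overlapping windows.

    Each window is advanced by `stride`; only targets not already scored by
    the previous window are counted, so every position is predicted once with
    near-maximal left context.
    """
    model.eval()
    total_nll, total_count = 0.0, 0
    prev_end = 1  # first target index not yet scored
    for start in range(0, len(token_ids) - 1, stride):
        end = min(start + window, len(token_ids))
        chunk = torch.tensor(token_ids[start:end], device=device).unsqueeze(0)
        logits = model(chunk[:, :-1])          # assumed shape (1, T-1, vocab)
        targets = chunk[:, 1:]
        nll = F.cross_entropy(logits.squeeze(0), targets.squeeze(0),
                              reduction="none")
        new = end - prev_end                   # targets new to this window
        total_nll += nll[-new:].sum().item()
        total_count += new
        prev_end = end
        if end == len(token_ids):
            break
    # Bits per byte, assuming a byte-level vocabulary (one token per byte).
    return total_nll / total_count / math.log(2.0)
```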
Record: 1.1539 BPB - 74.3M Ternary U-Net Transformer v2
Continuation of #640 - BF16 scale storage + EMBED_DIM=312
val_bpb: 1.1539 (3-seed mean sliding, std 0.0004) | 15.95 MB max artifact | 8xH100 SXM, 600s
What changed from #640
Two serialization/embedding improvements, same architecture and training recipe:
BF16 scale storage - ternary dequantization scales changed from FP16 to BF16. Zero additional bytes (both are 2B per value). BF16's wider exponent range eliminates magnitude rounding that gets amplified by the shrinkage correction factor 1/(1 - zero_frac) at high zero fractions. Reduces RT gap from 0.0021 to 0.0011. (A sketch of this scale round-trip follows after the list.)

EMBED_DIM 254 -> 312 - the BF16 fix plus RMS scale research confirmed headroom in the artifact budget. Increasing the embedding bottleneck from 254 to 312 adds 0.6M parameters exclusively in the FP8 path (+55KB compressed), improving representation quality monotonically. 312 is the largest multiple of 8 that fits within 16MB.
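To make the scale-dtype point concrete, here is a minimal, hedged sketch of absmean ternary quantization with the dequant-time shrinkage correction. The function names, the per-tensor (rather than per-channel) scale, and exactly where the 1/(1 - zero_frac) factor is applied are assumptions for illustration, not this submission's serializer.

```python
import torch

def quantize_ternary(w: torch.Tensor, scale_dtype=torch.bfloat16):
    # Absmean scale, then snap weights to {-1, 0, +1} (BitNet b1.58 style).
    scale = w.abs().mean()
    q = torch.clamp((w / scale).round(), -1, 1).to(torch.int8)
    zero_frac = (q == 0).float().mean().item()
    # The scale is the only non-ternary payload; FP16 and BF16 both cost 2 B,
    # but BF16's wider exponent range avoids rounding very small scales badly.
    return q, scale.to(scale_dtype), zero_frac

def dequantize_ternary(q: torch.Tensor, stored_scale: torch.Tensor, zero_frac: float):
    # Shrinkage correction: zeroed entries shrink the reconstructed magnitude,
    # so the scale is boosted by 1 / (1 - zero_frac). Any rounding error in the
    # stored scale is amplified by the same factor at high zero fractions.
    return q.float() * (stored_scale.float() / (1.0 - zero_frac))

# Illustrative round-trip check of the two scale dtypes.
w = torch.randn(768, 768) * 1e-3
for dt in (torch.float16, torch.bfloat16):
    q, s, zf = quantize_ternary(w, dt)
    gap = (dequantize_ternary(q, s, zf) - w).abs().mean().item()
    print(dt, gap)
```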
Additionally, WARMDOWN_FRACTION was adjusted from 0.2 to 0.15 based on extended training experiments.

Everything else is identical to #640: 10L 768d, BitNet b1.58 ternary, relu2 4x MLP, GQA 8/4 heads, U-Net skips, YaRN 2048, Muon+AdamW, poly5 softcap, FP8 QAT, Base-3 LZMA, stride-16 sliding eval, T=0.90.
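For context on the WARMDOWN_FRACTION knob, below is a minimal sketch of a warmup/plateau/warmdown learning-rate schedule. Only the 0.2 -> 0.15 warmdown fraction comes from this PR; the warmup fraction, the trapezoidal shape, and the function name are illustrative assumptions.

```python
def lr_at_step(step: int, total_steps: int, base_lr: float,
               warmup_fraction: float = 0.02,
               warmdown_fraction: float = 0.15) -> float:
    """Trapezoidal schedule: linear warmup, constant plateau, then a linear
    warmdown over the final `warmdown_fraction` of training steps."""
    warmup_steps = max(1, int(total_steps * warmup_fraction))
    warmdown_steps = max(1, int(total_steps * warmdown_fraction))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    if step >= total_steps - warmdown_steps:
        remaining = total_steps - step
        return base_lr * remaining / warmdown_steps
    return base_lr

# Example: with 10,000 total steps, step 9,000 sits inside the final 15%
# warmdown, so the LR has decayed to 2/3 of its peak value.
print(lr_at_step(9_000, 10_000, 3e-3))  # 0.002
```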
Results (3 seeds, 8xH100 SXM)
Comparison vs #640
Setup and Run
Full run command
Compliance