11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)#586
EaCognitive wants to merge 2 commits into openai:main
Conversation
force-pushed from e32e92a to 82e3375
Data-free Walsh-Hadamard rotation for int6 quantization. Improves zstd compression from 1.70x to 1.76x, recovering 530KB of artifact headroom that enables Shared Value Embeddings (VE128). No calibration data. No training data access at eval time. No TTT. 3-seed mean: 1.1365 +/- 0.0005 BPB. All artifacts under 16MB. Erick Aleman | EA Cognitive | www.eacognitive.com
force-pushed from 82e3375 to 0915466
This submission validates our current approach. Ongoing research has identified several promising extensions from recent literature that we are ready to test. A GPU grant application has been submitted. We welcome any feedback. Thank you.
Erick Aleman | EA Cognitive | www.eacognitive.com
Community Review — 11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)

BPB: 1.1365 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=52493 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or a non-standard function name, please flag it and I will re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
11L + Hadamard Rotation + VE128 + cuDNN SDPA
val_bpb: 1.1365 (sliding window stride=64, 3-seed mean, std 0.0005) | ~15.6 MB | 8xH100 SXM, 600s
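For reviewers unfamiliar with the protocol, a minimal sketch of stride-64 sliding-window BPB evaluation follows. The window size, byte normalization, and batching here are assumptions for illustration, not the submission's actual eval code:

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bpb(model, tokens, n_bytes, window=1024, stride=64):
    """Stride-64 sliding-window eval: each window scores only its last
    `stride` targets, so every scored token sees at least window - stride
    tokens of left context. Sketch simplification: the first `window`
    tokens of the split serve as context only."""
    ids = torch.tensor(tokens)
    total_nats = 0.0
    for end in range(window, len(ids) + 1, stride):
        x = ids[end - window:end].unsqueeze(0)   # (1, window)
        logits = model(x[:, :-1])                # (1, window - 1, vocab)
        targets = x[0, 1:]
        # only the final `stride` positions contribute to the loss
        total_nats += F.cross_entropy(
            logits[0, -stride:], targets[-stride:], reduction="sum"
        ).item()
    return total_nats / math.log(2) / n_bytes    # nats -> bits per byte
```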
3-Seed Results
Technique: Data-Free Hadamard Rotation for Int6 Quantization
Walsh-Hadamard rotation applied to weight matrices before int6 per-row quantization. The orthogonal rotation spreads outlier values uniformly, improving zstd compression from 1.70x to 1.76x and reducing the quantization gap from 0.0093 to 0.0084 BPB.
This technique is data-free: no calibration samples, no training data access at eval time. The rotation matrix is determined solely by the weight dimension, as sketched below.
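A minimal sketch of the scheme under these assumptions (function names, the symmetric [-31, 31] int6 range, and the right-multiplication convention are illustrative, not lifted from the PR):

```python
import torch

def hadamard(n: int) -> torch.Tensor:
    """Sylvester-construction Walsh-Hadamard matrix, normalized so that
    H @ H.T == I. Deterministic given n; requires n to be a power of two."""
    assert n & (n - 1) == 0, "dim must be a power of two"
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H / n ** 0.5

def quantize_int6_rotated(W: torch.Tensor):
    """Rotate, then per-row symmetric int6 quantization.
    Spreading outliers across each row flattens the value histogram,
    which is what lets zstd compress the int6 stream better."""
    H = hadamard(W.shape[1]).to(W.dtype)
    Wr = W @ H
    scale = Wr.abs().amax(dim=1, keepdim=True) / 31.0
    q = torch.clamp((Wr / scale).round(), -31, 31).to(torch.int8)
    return q, scale

def dequantize_int6_rotated(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Undo quantization and rotation: W ~= (q * scale) @ H.T."""
    H = hadamard(q.shape[1]).to(scale.dtype)
    return (q.to(scale.dtype) * scale) @ H.T
```

Because H is orthogonal, the rotate/quantize/rotate-back round trip changes nothing about the model except the quantization error, which is why the technique needs no calibration data.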
No other open or merged PR uses rotation-based quantization.
Compression Enables Architecture
The 0.06x compression improvement recovers 530KB of artifact headroom within the 16MB budget, directly enabling Shared Value Embeddings (VE128 on layers 9-10), which previously overflowed the budget with only 44KB of headroom remaining.
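Back-of-envelope for the 530KB figure, assuming roughly 26.4MB of uncompressed int6 payload (the raw size is inferred from the ~15.6MB compressed artifact, not stated in the PR):

```python
raw_mb = 26.4            # assumed uncompressed int6 payload (inference, not from the PR)
before = raw_mb / 1.70   # ~15.53 MB compressed at the old ratio
after = raw_mb / 1.76    # ~15.00 MB compressed at the new ratio
print(f"headroom recovered: {(before - after) * 1024:.0f} KB")  # ~540 KB
```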
Ablation
Findings
Architecture
11 layers, 512-dim, 8 heads (4 KV heads, GQA), MLP with 3x expansion and relu-squared activation, XSA on the last 4 layers, partial RoPE (16 of 64 head dims), LN scale, U-Net skip connections, SmearGate, BigramHash(2048), EMA 0.997, cuDNN SDPA. Muon lr=0.025 + AdamW lr=0.035. Warmdown over 3500 steps (cosine).
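The same hyperparameters as a config sketch for quick scanning (field names and the XSA layer indices are illustrative guesses, not the PR's actual code; VE layers 9-10 are as stated above):

```python
from dataclasses import dataclass

@dataclass
class Config:
    n_layer: int = 11
    n_embd: int = 512
    n_head: int = 8
    n_kv_head: int = 4                 # GQA
    mlp_ratio: int = 3                 # relu^2 MLP
    head_dim: int = 64
    rope_dims: int = 16                # partial RoPE: 16 of 64 head dims
    xsa_layers: tuple = (7, 8, 9, 10)  # last 4 layers (assumed indices)
    ve_layers: tuple = (9, 10)         # shared value embeddings (VE128)
    bigram_hash_size: int = 2048
    ema_decay: float = 0.997
    muon_lr: float = 0.025
    adamw_lr: float = 0.035
    warmdown_steps: int = 3500         # cosine warmdown
```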
Erick Aleman | EA Cognitive | www.eacognitive.com | github.com/eacognitive