nGPT on the Hypersphere: Making Normalized Transformers Work at 16MB (Research)#1108
DbBested wants to merge 2 commits into openai:main from
Conversation
Research contribution (not a record submission): a full nGPT investigation under Parameter Golf constraints, with novel findings including a torch.compile precision fix, post-dequant renormalization, and a systematic ablation across 15+ configurations. Best result: 1.1502 BPB (sliding window, 8xH200 SXM, 560s training).
Community Review — nGPT on the Hypersphere: Making Normalized Transformers Work at 16MB (Research)

BPB: 1.1502 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 1158 implements the score-first-per-chunk pattern: each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 4.91s, dim=512, layers=11, vocab=1024, code=118566 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
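For readers checking the compliance pattern themselves, here is a minimal sketch of score-first-per-chunk TTT. The names `score_fn` and `update_fn` are hypothetical stand-ins for the submission's actual scoring and adapter-update code:

```python
def evaluate_with_ttt(chunks, score_fn, update_fn, state):
    """Score-first-per-chunk TTT: every chunk is scored with the adapter
    state as it stood BEFORE the adapter has trained on that chunk."""
    total_loss, total_tokens = 0.0, 0
    for chunk in chunks:
        loss, n_tokens = score_fn(state, chunk)  # score under the stale state
        total_loss += loss * n_tokens
        total_tokens += n_tokens
        state = update_fn(state, chunk)          # only then adapt on the chunk
    return total_loss / total_tokens
```

Scoring before updating is what keeps the eval single-pass and target-blind at score time; swapping the two steps inside the loop would leak each chunk's targets into its own score, which is exactly the pattern the classifier flags.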
Research contribution (not a record submission). I wanted to see if nGPT could be made competitive under Parameter Golf constraints after PR #831 dismissed it. Turns out it can — and the investigation turned up some findings that might be useful to others.
val_bpb: 1.1502 (sliding window, 8xH200, 560s)
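One finding named in the summary, post-dequant renormalization, can be sketched as follows. This assumes a symmetric per-row integer quantizer; `quantize_rows` is illustrative and the PR's actual quantization scheme may differ:

```python
import math

def quantize_rows(w, bits=8):
    # Symmetric per-row integer quantization (illustrative sketch).
    lim = 2 ** (bits - 1) - 1
    q, scales = [], []
    for row in w:
        s = max(abs(v) for v in row) / lim or 1.0
        q.append([round(v / s) for v in row])
        scales.append(s)
    return q, scales

def dequantize_and_renorm(q, scales):
    out = []
    for row, s in zip(q, scales):
        deq = [v * s for v in row]
        # Post-dequant renormalization: rounding error pushes rows slightly
        # off the unit hypersphere; project them back before inference.
        n = math.sqrt(sum(v * v for v in deq)) or 1.0
        out.append([v / n for v in deq])
    return out
```

The point is that nGPT's invariant (row vectors on the unit hypersphere) is broken by quantization round-off, so renormalizing after dequantization restores it at negligible cost.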
Findings
torch.compile precision fix: register the custom autograd.Function via allow_in_graph. Generalizes beyond nGPT. (Large numeric divergence for torch.compile vs eager in bf16: pytorch/pytorch#168126)
Hardware
MIT ORCD cluster, 8xH200 SXM, CUDA 12.4, PyTorch 2.6. 560s wallclock.
Full writeup with code, tables, and ablations in the README.