
nGPT on the Hypersphere: Making Normalized Transformers Work at 16MB (Research)#1108

Open
DbBested wants to merge 2 commits into openai:main from DbBested:ngpt-research-pr

Conversation


@DbBested commented Mar 30, 2026

Research contribution (not a record submission). I wanted to see if nGPT could be made competitive under Parameter Golf constraints after PR #831 dismissed it. Turns out it can — and the investigation turned up some findings that might be useful to others.

val_bpb: 1.1502 (sliding window, 8xH200, 560s)

Findings

  • PR #831 (Research: Why Novel Architectures Fail at 16MB — Throughput-Quantization Co-optimization) failed because of three init bugs, not architectural incompatibility. Fixing the zero-init projections, adding a learnable logit scale, and not normalizing embeddings takes nGPT from 1.69 to 1.27 BPB.
  • torch.compile has a precision compounding bug with sequential L2 normalization: Inductor fuses through float() casts, so across 86 normalize calls the bf16 errors compound catastrophically. Fix: wrap normalize in an opaque autograd.Function registered via allow_in_graph (see the first sketch after this list). This generalizes beyond nGPT (pytorch/pytorch#168126, "Large numeric divergence for torch.compile vs eager in bf16").
  • Post-dequant renormalization cuts the quantization gap from +0.35 to +0.008 BPB in three lines of code (second sketch after this list).
  • nGPT's compression advantage vanishes at full training length — undertrained weights compress well because they're close to orthogonal init, not because of the architecture.
  • Paper design choices (signed alpha, s_z scaling) hurt at 5000 steps — the optimizer doesn't have time to exploit the extra degrees of freedom.
  • TTT is incompatible with renorm dequantization — produces NaN at every LR tested.
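
A minimal sketch of the compile fix, under my reading of the finding above (class and function names here are illustrative, not lifted from the PR's train_gpt.py): the L2 normalization runs in fp32 inside a custom autograd.Function, so the compiler sees one opaque node instead of a chain of casts it can fuse through, and allow_in_graph keeps Dynamo from tracing into it.

```python
import torch
import torch._dynamo
import torch.nn.functional as F

class OpaqueNormalize(torch.autograd.Function):
    """L2-normalize in fp32 as a single opaque node, so Inductor cannot
    fuse through the float() cast and reintroduce bf16 rounding."""

    @staticmethod
    def forward(ctx, x):
        x32 = x.float()
        out32 = F.normalize(x32, p=2, dim=-1)
        ctx.save_for_backward(x32, out32)
        return out32.to(x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        x32, out32 = ctx.saved_tensors
        g = grad_out.float()
        norm = x32.norm(p=2, dim=-1, keepdim=True).clamp_min(1e-12)
        # Jacobian of x / ||x||: grad_in = (g - (g . out) out) / ||x||
        proj = (g * out32).sum(dim=-1, keepdim=True)
        return ((g - proj * out32) / norm).to(grad_out.dtype)

# Register the wrapped call as a single graph node so torch.compile
# does not inline (and Inductor does not fuse through) the fp32 math.
opaque_normalize = torch._dynamo.allow_in_graph(OpaqueNormalize.apply)
```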

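And a sketch of the post-dequant renormalization, assuming (as I read the finding) that the quantized matrices are nGPT weights whose rows are meant to sit on the unit hypersphere; the function name and scale layout below are made up for illustration:

```python
import torch
import torch.nn.functional as F

def dequantize_renorm(q_weight: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Ordinary dequantization, then snap each row back onto the unit sphere:
    # quantization noise pushes rows slightly off it, and renormalizing
    # removes most of the resulting BPB gap.
    w = q_weight.float() * scale
    return F.normalize(w, p=2, dim=-1)
```
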
Hardware

MIT ORCD cluster, 8xH200 SXM, CUDA 12.4, PyTorch 2.6. 560s wallclock.

Full writeup with code, tables, and ablations in the README.

…t 16MB

Research contribution (not a record submission): full nGPT investigation under
Parameter Golf constraints with novel findings including a torch.compile
precision fix, post-dequant renormalization, and systematic ablation across 15+
configurations.

Best result: 1.1502 BPB (sliding window, 8xH200 SXM, 560s training)
@MatoTeziTanka

Community Review — nGPT on the Hypersphere: Making Normalized Transformers Work at 16MB (Research)

BPB: 1.1502 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA 673af4d636eb, file records/track_10min_16mb/2026-03-29_nGPT_Hypersphere_L2NormalizeHP/train_gpt.py):

The TTT path at line 1158 implements the score-first-per-chunk pattern: each chunk is scored under torch.no_grad() / inference_mode() before the base_model.train() + SGD adaptation runs on that same chunk, with an is_last_chunk guard so the final chunk gets no adaptation pass. This is the structural shape the legal frontier uses (PRs #1416 erichroepke, #1423 aryanbhosale).

Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk i is scored under weights adapted only on chunks 0..i-1. No prequant_ttt_adapt_adamw(val_tokens, ...) multi-epoch fine-tune, no scored-region SLOT, no target-in-key n-gram cache.
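
For readers unfamiliar with the pattern, a minimal sketch of that loop structure (this shows the shape of the eval path, not the PR's actual code, and assumes model(chunk) returns the chunk's mean next-token loss):

```python
import torch

def ttt_eval(model, chunks, opt):
    total_loss, total_tokens = 0.0, 0
    for i, chunk in enumerate(chunks):
        # Score first: chunk i is evaluated under weights adapted
        # only on chunks 0..i-1.
        model.eval()
        with torch.no_grad():
            total_loss += model(chunk).item() * chunk.numel()
        total_tokens += chunk.numel()
        # Adapt afterwards, skipping the final chunk (the is_last_chunk guard).
        if i < len(chunks) - 1:
            model.train()
            opt.zero_grad()
            model(chunk).backward()
            opt.step()
    return total_loss / total_tokens
```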

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 4.91s, dim=512, layers=11, vocab=1024, code=118566 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora); CPU smoke test as reported above (CT2038 proteus-engine, 2026-04-11, SMOKE_TEST_PASS). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
