Record: Sliding Window Eval, 2048 Vocab Size, fp16 embeddings, SWA, NorMuon, FA3; mean_val_bpb:1.160 #122
mtybadger wants to merge 2 commits into openai:main
Conversation
Community Review — Record: Sliding Window Eval, 2048 Vocab Size, fp16 embeddings, SWA, NorMuon, FA3; mean_val_bpb:1.160

BPB: 1.160 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA …): Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.02s, dim=512, layers=9, vocab=1024, code=63530 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
User flagged that the high step_avg values for iter 117b-3 were likely
compile-init not yet amortized. They were right.
step_avg is cumulative (total/N), not per-step. Per-step deltas at
s2-s10 were 21-28s (mean ~24s), only ~5% slower than the iter 95 baseline
of 23.5s. The 111s step-1 compile amortizes to ~0.1s/step over 1000 steps.
Also corrected throughput-economics math: at C=8 with E=15, sparse
dispatch handles C*N tokens (8N for balanced routing), not 8*E*N.
That's FEWER than dense 15N -- should be faster, not slower.
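As a quick sanity check of that accounting (the C/E/N letters are just the ones used above; this is the arithmetic, not the router code):

```python
# Token-dispatch accounting from the paragraph above, per N batch tokens.
C, E = 8, 15            # C=8 dispatch factor per token (as used above), E=15 experts
N = 1                   # counting per N tokens

sparse_dispatch = C * N      # 8N token-expert pairs under balanced routing
dense_dispatch  = E * N      # 15N if every token went through all experts
wrong_estimate  = C * E * N  # 120N, the earlier (incorrect) accounting

assert sparse_dispatch < dense_dispatch < wrong_estimate
print(sparse_dispatch, dense_dispatch, wrong_estimate)   # 8 15 120
```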
Reverting:
- train_gpt.py: deq_bptt_k 4 -> 3 (restore iter 95 baseline)
- hypotheses.md: replace NOT-THROUGHPUT-DELIVERING block with
KILLED-PREMATURELY block, marking relaunch pending
- task openai#118: in_progress (relaunch pending)
- task openai#122 (117b-3b): un-skipped, re-queued
Relaunching iter 117b-3 next with corrected understanding.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Update 3/20: Added two more 8xH100 runs with SEED=41,102; the 3-run mean is `final_w6e16_zstd22_roundtrip_exact val_bpb:1.17007988` and `final_sliding_window_exact val_bpb:1.16027254`.

Day 2! This record brings the ideas from my last work (#78), which was high vocab size, NorMuon and mixed int6/int8 quantization, up to the frontier by copying a bunch of other people! Specifically, I take the STE and SWA ideas from @vmfunc (#89), the sliding window eval with seqlen=1024 and stride=64 from #50 @mattqlf and #65 @aquariouseworkman, and the Momentum/LR tuning from #52 @spokane-way and #61 @saml212. I also use FA3, which decreases step time by about 10ms - a total free lunch! N.b. I'm not sure whether importing the FA3 library violates the 16MB code requirement, since the rules have been unclear so far. I expect that, in the spirit of the competition, imported kernels should count toward the 16MB limit, so I'm working on bringing the FA3 kernels into the record folder as they do over on modded-nanogpt.

The tradeoffs are getting tough. I'm sticking to my guns in losing a layer for higher vocab size, and I think everyone else is right that keeping embeddings in fp16 reduces the quant gap, which means I had to take my vocab size down to compensate. It's really a question of whether we want more diversity in vocab or more resolution in representation, and I think there's a better optimum in between yet to find.
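Since the sliding-window eval does a lot of the work here, a minimal sketch of the standard seqlen=1024 / stride=64 pattern referenced above (this is not the submission's exact eval loop, and the `model(inputs)` call returning `(1, T, vocab)` logits is an assumption):

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_nll(model, tokens, seqlen=1024, stride=64, device="cuda"):
    """Sum NLL (nats) over `tokens` (1-D LongTensor) with overlapping windows,
    so each scored token sees up to seqlen-1 tokens of left context."""
    total_nll, n_scored, prev_end = 0.0, 0, 0
    for begin in range(0, len(tokens) - 1, stride):
        end = min(begin + seqlen, len(tokens) - 1)
        keep = end - prev_end                      # only score tokens not scored yet
        window = tokens[begin:end + 1].to(device)
        logits = model(window[:-1].unsqueeze(0))   # assumed signature: (1, T, vocab)
        targets = window[1:]
        total_nll += F.cross_entropy(logits[0, -keep:], targets[-keep:],
                                     reduction="sum").item()
        n_scored += keep
        prev_end = end
        if end == len(tokens) - 1:
            break
    return total_nll, n_scored

# val_bpb would then be total_nll / math.log(2) / total_val_bytes.
```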
Changes in this model from baseline:
- Data and tokenizers prepared with `./data/download_hf_docs_and_tokenize.py --output-root ./data --tokenizer-config ./data/tokenizer_specs.json --max-train-tokens 8000000000 --tokenizer-train-docs 100000`, for a 50/50 val/train split. Tokenizers for sp1024, 2048, 4096 and 8192, with data available on my huggingface.
- NorMuon from modded-nanogpt, replacing Muon.

Configuration:

`step_avg:43.67ms` and `final_int8_zlib_roundtrip_exact val_bpb:1.22731147` immediately before.

Command:
Key metrics (from `train.log`):

- `11132/20000` steps due to the wallclock cap, which is further than before!
- `val_loss:2.3953 val_bpb:1.1670`
- `val_loss:2.3982 val_bpb:1.1684 eval_time:1324ms`
- `val_loss:2.3780 val_bpb:1.1585 eval_time:205575ms`
- `train_time:600081ms step_avg:53.91ms`
- `15289740 bytes` + `63530 bytes` (code) = `15353270 bytes`
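For reference, these bpb figures follow the usual conversion from the nats-per-token loss (the exact byte accounting lives in `train.log`; `bytes_per_token` below is back-solved from one reported pair above, not read from the log):

```python
import math

# Standard bits-per-byte conversion: bpb = (loss in nats / ln 2) / bytes per token.
val_loss_nats = 2.3780                         # one val_loss/val_bpb pair quoted above
val_bpb       = 1.1585
bits_per_token  = val_loss_nats / math.log(2)  # ~3.43 bits/token
bytes_per_token = bits_per_token / val_bpb     # ~2.96 bytes/token, back-solved
print(round(bits_per_token, 3), round(bytes_per_token, 3))
```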
Training volume:

- `524288` tokens/step
- `7224688640` tokens

Included files:

- `train_gpt.py` (code snapshot used for the run)
- `train.log` (exact remote training log)
- `submission.json` (leaderboard metadata)