
Record: Sliding Window Eval, 2048 Vocab Size, fp16 embeddings, SWA, NorMuon, FA3; mean_val_bpb:1.160#122

Open
mtybadger wants to merge 2 commits into openai:main from mtybadger:spruce/3.19.2

Conversation

@mtybadger

@mtybadger mtybadger commented Mar 19, 2026

Update 3/20: Added two more 8xH100 runs with SEED=41,102; the 3-run mean is final_w6e16_zstd22_roundtrip_exact val_bpb:1.17007988 and final_sliding_window_exact val_bpb:1.16027254.

Day 2! This record brings the ideas from my last work (#78), namely high vocab size, NorMuon, and mixed int6/int8 quantization, up to the frontier by copying a bunch of other people! Specifically, I take the STE and SWA ideas from @vmfunc (#89), the sliding-window eval with seqlen=1024 and stride=64 from @mattqlf (#50) and @aquariouseworkman (#65), and the momentum/LR tuning from @spokane-way (#52) and @saml212 (#61). I also use FA3, which decreases step time by about 10ms - a total free lunch! N.b. I'm not sure whether importing the FA3 library violates the 16MB code requirement, since the rules have been unclear on this so far. I expect that, in the spirit of the competition, kernels used should count toward the 16MB limit, so I'm working on bringing the FA3 kernels into the record folder the way modded-nanogpt does.

The tradeoffs are getting tough. I'm sticking to my guns in losing a layer for higher vocab size, and I think everyone else is right that keeping embeddings in fp16 reduces the quant gap, which meant I had to take my vocab size down to compensate. It's really a question of whether we want more diversity in the vocab or more resolution in the representation, and I think there's a better optimum in between yet to be found.

Changes in this model from baseline:

  • Vocab size 1024 -> 2048
  • New "sp2048" tokenizer trained using ./data/download_hf_docs_and_tokenize.py --output-root ./data --tokenizer-config ./data/tokenizer_specs.json --max-train-tokens 8000000000 --tokenizer-train-docs 100000, for a 50/50 val/train split. Tokenizers for sp1024, 2048, 4096 and 8192 with data available on my huggingface
  • NorMuon implementation from the original paper, popularized by modded-nanogpt, replacing Muon
  • The weights are quantized row-wise to int6, while the embeddings are kept in fp16. The model is quantization-aware trained: the weights are fake-quantized during the forward pass to make them more amenable to quantization at serialization time, with a straight-through estimator on the backward pass. Many people have done this - fp16 tied embedding + warmdown/LR tuning (val_bpb 1.2197) #42 @chonchiog (first); ArjunAutoResearch: MLP 3x + STE int6 QAT + seq4096 + sliding window (val_bpb 1.1632) #66 @arjun-krishna1
  • FlashAttention 3, thank u tri dao
  • Sliding-window eval! It's a bit of a wheeze but everyone's doing it, and I wanted to see how it played with the rest of my stack. Stride=64, seqlen=1024. ty @mattqlf
  • 3X MLP: hidden dim 1536 (up from 1024). ty @jfprincz
  • SWA (stochastic weight averaging): collects model checkpoints every 200 steps during the final warmdown phase and averages them. The averaged weights generalize slightly better than the final point estimate; 7 checkpoints were averaged in the submitted run. ty @vmfunc!
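For readers new to QAT, the row-wise int6 fake-quant step above can be sketched as follows. This is a numpy stand-in, not the actual train_gpt.py code; during training the backward pass treats this op as identity (the straight-through estimator), so gradients flow to the underlying fp weights unchanged.

```python
import numpy as np

def fake_quant_rowwise_int6(w: np.ndarray) -> np.ndarray:
    """Row-wise symmetric int6 fake-quantization (quantize, then dequantize).

    During QAT the forward pass sees these fake-quantized weights; the
    backward pass uses a straight-through estimator, i.e. it pretends this
    function is the identity.
    """
    qmax = 2 ** (6 - 1) - 1                     # symmetric int6 range: [-31, 31]
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)    # guard all-zero rows
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).astype(w.dtype)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
wq = fake_quant_rowwise_int6(w)
# per-element error is bounded by half a quantization step (scale / 2)
assert np.all(np.abs(w - wq) <= np.abs(w).max(axis=1, keepdims=True) / 31 / 2 + 1e-6)
```

Row-wise scales (one per output row) are what keep the quant gap small at int6; a single tensor-wide scale would let outlier rows dominate.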

Configuration:

  • All hyperparams as in default NaiveBaseline except VOCAB_SIZE, TRAIN_SEQ_LEN, WARMDOWN_ITERS and NUM_LAYERS; unfortunately to get the increased vocab size we have to sacrifice a layer.
  • Tested on Hyperbolic 8xH100 setup with SXM5; reproduced baseline with step_avg:43.67ms and final_int8_zlib_roundtrip_exact val_bpb:1.22731147 immediately before.
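The SWA step from the changes list reduces to a uniform mean over the collected warmdown checkpoints. A minimal sketch, with checkpoints represented as hypothetical name-to-array state dicts (not the actual train_gpt.py implementation):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Uniformly average a list of checkpoint state dicts (name -> ndarray).

    Mirrors the SWA scheme described above: checkpoints collected every
    200 steps during warmdown are averaged into one set of weights, which
    tends to generalize slightly better than the final point estimate.
    """
    n = len(checkpoints)
    # accumulate in float64 to avoid precision loss when summing fp16/fp32
    avg = {k: np.zeros(v.shape, dtype=np.float64) for k, v in checkpoints[0].items()}
    for ckpt in checkpoints:
        for k, v in ckpt.items():
            avg[k] += np.asarray(v, dtype=np.float64) / n
    return {k: v.astype(checkpoints[0][k].dtype) for k, v in avg.items()}

# toy usage: two checkpoints -> elementwise mean
c1 = {"w": np.ones((2, 2), dtype=np.float32)}
c2 = {"w": 3 * np.ones((2, 2), dtype=np.float32)}
avg = average_checkpoints([c1, c2])
# avg["w"] is all 2.0, dtype float32
```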

Command:

NCCL_IB_DISABLE=1 \
RUN_ID=verify_sp2048_w6e16_8gpu \
DATA_PATH=./data/datasets/fineweb10B_sp2048 \
TOKENIZER_PATH=./data/tokenizers/fineweb_2048_bpe.model \
VOCAB_SIZE=2048 \
MAX_WALLCLOCK_SECONDS=600 \
TRAIN_SEQ_LEN=4096 \
NUM_LAYERS=8 \
MLP_MULT=3 \
torchrun --standalone --nproc_per_node=8 ./records/track_10min_16mb/2026-03-19_SWE_VocabSize_NorMuon_Int6/train_gpt.py

Key metrics (from train.log):

  • Timed training stopped at 11132/20000 steps due to the wallclock cap - further than before!
  • Pre-quant eval at stop: val_loss:2.3953 val_bpb:1.1670
  • Post-quant roundtrip eval: val_loss:2.3982 val_bpb:1.1684 eval_time:1324ms
  • Post-quant roundtrip sliding window eval: val_loss:2.3780 val_bpb:1.1585 eval_time:205575ms
  • Train time: train_time:600081ms step_avg:53.91ms
  • Serialized model w6e16+zstd22: 15289740 bytes
  • Code size: 63530 bytes
  • Total submission size w6e16+zstd22: 15353270 bytes
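The sliding-window eval behind the numbers above can be sketched as follows: each window of up to seqlen tokens is scored, but only the tokens not covered by the previous window count, so every token (after the first) is scored exactly once with long left context. `logprob_fn` is a hypothetical stand-in for the model that returns per-token log-probs for `window[1:]`.

```python
import numpy as np

def sliding_window_nll_bits(logprob_fn, tokens, seqlen=1024, stride=64):
    """Mean negative log-likelihood in bits/token under sliding-window eval.

    The window advances by `stride`; only the `trg_len` tokens not scored
    by the previous window contribute, so each token is counted once.
    """
    total_nll_nats, n_scored = 0.0, 0
    prev_end = 1  # the very first token has no context and is never scored
    for begin in range(0, len(tokens) - 1, stride):
        end = min(begin + seqlen, len(tokens))
        trg_len = end - prev_end
        if trg_len > 0:
            lp = logprob_fn(tokens[begin:end])   # log-probs for window[1:]
            total_nll_nats += -lp[-trg_len:].sum()
            n_scored += trg_len
            prev_end = end
        if end == len(tokens):
            break
    return total_nll_nats / n_scored / np.log(2.0)

# sanity check with a dummy uniform model over a 2048-token vocab:
V = 2048
uniform = lambda window: np.full(len(window) - 1, -np.log(V))
bits = sliding_window_nll_bits(uniform, list(range(5000)), seqlen=1024, stride=64)
# a uniform model scores exactly log2(V) = 11 bits/token
```

Note why eval_time balloons to ~205s: with stride=64 the model runs a full seqlen-1024 forward pass for every 64 tokens scored, a 16x compute overhead versus non-overlapping windows.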

Training volume:

  • Global batch: 524288 tokens/step
  • Total train tokens seen: 7224688640

Included files:

  • train_gpt.py (code snapshot used for the run)
  • train.log (exact remote training log)
  • submission.json (leaderboard metadata)

@MatoTeziTanka

Community Review — Record: Sliding Window Eval, 2048 Vocab Size, fp16 embeddings, SWA, NorMuon, FA3; mean_val_bpb:1.160

BPB: 1.160 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 63c36ab5f1b4, file records/track_10min_16mb/2026-03-19_SWE_VocabSize_NorMuon_Int6/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.02s, dim=512, layers=9, vocab=1024, code=63530 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.

jzmyres pushed a commit to jzmyres/parameter-golf that referenced this pull request May 3, 2026
User flagged that the iter 117b-3 step_avg high values were likely
compile-init not yet amortized. They were right.

step_avg is cumulative (total/N), not per-step. Per-step deltas at
s2-s10 were 21-28s (mean ~24s), only ~5% slower than iter 95 baseline
23.5s. 111s s1 compile amortizes to <0.1s/step over 1000 steps.

Also corrected throughput-economics math: at C=8 with E=15, sparse
dispatch handles C*N tokens (8N for balanced routing), not 8*E*N.
That's FEWER than dense 15N -- should be faster, not slower.

Reverting:
  - train_gpt.py: deq_bptt_k 4 -> 3 (restore iter 95 baseline)
  - hypotheses.md: replace NOT-THROUGHPUT-DELIVERING block with
    KILLED-PREMATURELY block, marking relaunch pending
  - task openai#118: in_progress (relaunch pending)
  - task openai#122 (117b-3b): un-skipped, re-queued

Relaunching iter 117b-3 next with corrected understanding.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
