
Record: Vocab 4096 + MLP 3x + Sliding Window Eval (mean val_bpb=1.1642, 3 seeds)#123

Open
saikrishnarallabandi wants to merge 3 commits into openai:main from saikrishnarallabandi:submit-v4096-mlp3x-sw

Conversation

@saikrishnarallabandi commented Mar 19, 2026

Summary

mean val_bpb=1.1642 across 3 seeds on 8×H100 | Artifact: ~15.85 MB (under the 16 MB cap)

Six improvements stacked on the baseline:

  1. Vocab 4096 — custom SentencePiece BPE tokenizer (included). More bytes per token means better bits-per-byte.
  2. 3x MLP expansion (hidden=1536) — paid for by the int6 quantization savings.
  3. Int6 straight-through-estimator (STE) fake quantization — quantization gap of only +0.005 BPB.
  4. Seq4096 training — 4x longer context.
  5. SWA — 7-checkpoint average during warmdown.
  6. Sliding-window eval (stride=256) — 148 s eval time on 8×H100.
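The int6 fake quantization in item 3 (quantize in the forward pass, copy gradients straight through in the backward pass) can be sketched numerically. This is an illustrative stand-alone function, not the PR's `train_gpt.py` code:

```python
def fake_quantize(w, bits=6):
    """Symmetric per-tensor fake quantization: snap each weight to an
    evenly spaced int grid, then dequantize back to float. With a
    straight-through estimator (STE), the backward pass would pass
    gradients through this rounding unchanged (in torch-style autograd,
    typically written as w + (quantize(w) - w).detach())."""
    qmax = 2 ** (bits - 1) - 1                     # 31 for int6
    scale = max(abs(x) for x in w) / qmax or 1.0   # guard all-zero tensors
    return [round(x / scale) * scale for x in w]

weights = [0.50, -0.31, 0.02, 0.49]
quantized = fake_quantize(weights, bits=6)
# per-element quantization error is bounded by scale / 2 (~0.008 here)
```

The small +0.005 BPB quant gap the PR reports is plausible at 6 bits, since the rounding error per weight is bounded by half a quantization step.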

3-Seed Validation

| Seed | val_bpb | Artifact size |
|------|---------|---------------|
| 1337 | 1.1650  | 15,846,785 bytes |
| 42   | 1.1640  | 15,846,550 bytes |
| 7    | 1.1637  | 15,846,550 bytes |

Mean: 1.1642, Std: 0.0007, p < 0.0001 (one-sample t-test vs baseline 1.2244)
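With only 3 seeds (df = 2) the t-distribution tail has a closed form, so the quoted statistics can be reproduced without scipy. A minimal check (the function name is mine, not from the submission):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t_df2(samples, mu0):
    """One-sample two-sided t-test. The p-value formula
    p = 1 - |t| / sqrt(2 + t^2) is the exact tail probability of the
    t-distribution with 2 degrees of freedom (i.e. exactly 3 samples)."""
    t = (mean(samples) - mu0) / (stdev(samples) / sqrt(len(samples)))
    p = 1 - abs(t) / sqrt(2 + t * t)
    return t, p

t, p = one_sample_t_df2([1.1650, 1.1640, 1.1637], mu0=1.2244)
# t is large and negative; p lands below 1e-4, consistent with "p < 0.0001"
```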

Command

```shell
VOCAB_SIZE=4096 NUM_LAYERS=8 TRAIN_SEQ_LEN=4096 MLP_MULT=3 \
WARMDOWN_ITERS=3000 WEIGHT_QUANTIZATION_BITS=6 EMBED_QUANTIZATION_BITS=8 \
EVAL_STRIDE=256 SWA_ENABLED=1 MATRIX_LR=0.02 SCALAR_LR=0.02 TIED_EMBED_LR=0.03 \
MUON_MOMENTUM=0.99 MUON_MOMENTUM_WARMUP_START=0.92 MUON_MOMENTUM_WARMUP_STEPS=1500 \
MAX_WALLCLOCK_SECONDS=600 \
DATA_PATH=./data/datasets/fineweb10B_sp4096 \
TOKENIZER_PATH=./data/tokenizers/fineweb_4096_bpe.model \
torchrun --standalone --nproc_per_node=8 train_gpt.py
```

Tokenizer and dataset at sproos/parameter-golf-tokenizers.
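The sliding-window eval (`EVAL_STRIDE=256`) presumably follows the usual strided-perplexity bookkeeping: windows overlap so each scored token sees close to a full window of context, and each window is only charged for the tokens the previous window has not already scored. A schematic of that bookkeeping (illustrative, not the PR's actual eval loop):

```python
def strided_eval_spans(n_tokens, window=4096, stride=256):
    """Return (begin, end, n_scored) triples for strided evaluation.
    Each forward pass covers tokens [begin, end); only the n_scored
    tokens at its tail (those beyond the previous window's end) count
    toward the loss, so every token is scored exactly once."""
    spans, prev_end = [], 0
    for begin in range(0, n_tokens, stride):
        end = min(begin + window, n_tokens)
        spans.append((begin, end, end - prev_end))
        prev_end = end
        if end == n_tokens:
            break
    return spans

# e.g. 5000 tokens: the first window scores 4096 tokens, later ones 256 each
spans = strided_eval_spans(5000)
```

A smaller stride means more forward passes but more context per scored token, which is why the 148 s eval-time figure matters against the ≤600 s eval budget.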

Files

  • train_gpt.py — self-contained script (1390 lines)
  • train.log + train_seed1337.log, train_seed42.log, train_seed7.log
  • submission.json, fineweb_4096_bpe.model
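The SWA step (item 5, a 7-checkpoint average during warmdown) reduces to an element-wise mean over saved parameter snapshots. A sketch with plain lists standing in for tensors (illustrative; the submission presumably averages torch state dicts):

```python
def average_checkpoints(checkpoints):
    """Element-wise mean of each parameter across K checkpoint snapshots.
    Here a checkpoint is a dict mapping parameter name -> list of floats."""
    k = len(checkpoints)
    return {
        name: [sum(vals) / k for vals in zip(*(c[name] for c in checkpoints))]
        for name in checkpoints[0]
    }

# e.g. averaging two snapshots of a 2-element "tensor"
avg = average_checkpoints([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}])
# avg == {"w": [2.0, 3.0]}
```

Averaging checkpoints taken late in a decaying-LR schedule tends to land in a flatter region of the loss surface than any single snapshot, which is the usual motivation for doing this during warmdown.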

vijayavedartham and others added 2 commits March 19, 2026 14:36
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@saikrishnarallabandi changed the title from "Record: Vocab 4096 + MLP 3x + Sliding Window Eval (val_bpb=1.1655)" to "Record: Vocab 4096 + MLP 3x + Sliding Window Eval (mean val_bpb=1.1642, 3 seeds)" on Mar 19, 2026
@MatoTeziTanka

Community Review — Record: Vocab 4096 + MLP 3x + Sliding Window Eval (mean val_bpb=1.1642, 3 seeds)

BPB: 1.1642 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 9b03d736ab45, file records/track_10min_16mb/2026-03-19_SlidingWindowEval/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.04s, dim=512, layers=9, vocab=1024, code=58337 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting — if the template misread your code, please call it out so I can iterate the classifier.

