Record: SP8192 + Muon 0.97 + Legal Score-First TTT — val_bpb 1.07983 (3-seed mean) #1514
Merged
cocohearts merged 1 commit into openai:main on Apr 29, 2026
Conversation
…val_bpb 1.07983 3-seed mean

val_bpb 1.07983 (std 0.00050) on the PR openai#1394 sp8192 stack.

Changes from PR openai#1394 + PR openai#1413 baseline:
- Muon momentum = 0.97 (vs 0.99 default); warmup 0.92→0.97 unchanged
- Causal token n-gram tilt (base_beta=2.0, agree_bonus=0.1) on top of legal score-first TTT; within-word and word-start experts explicitly disabled (within_beta=0, word_beta=0) because they cannot be made fully causal.
- 3-seed verification (seeds 0/42/1234)

Seeds:
- seed 0 → 1.07928 bpb / 2.78790 nats / 15,993,346 bytes
- seed 42 → 1.07997 bpb / 2.78967 nats / 15,992,995 bytes
- seed 1234 → 1.08025 bpb / 2.79039 nats / 15,994,604 bytes
- mean → 1.07983 bpb / 2.78932 nats / 15,993,648 bytes

Delta vs current merged SOTA PR openai#1493 (1.0810): 0.00117 bpb / 0.00302 nats per token.

Credits: @clarkkev (base PR openai#1394 sp8192 stack), @abaybektursun (n-gram tilt kernel PR openai#1420, causal fix applied), prior legal-TTT precedent PR openai#549 / PR openai#461.

Platform: 8×H100 80GB SXM, PyTorch 2.9.1+cu128. Training 588s, eval <437s per seed, both under the 600s budget. Artifact under 16 MB on all 3 seeds.
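For illustration, the causality discipline behind a token n-gram tilt can be sketched in a few lines of Python. This is a minimal sketch, not the PR's kernel: the names (`process_sequence`, `tilt_logits`), the bigram context, and the log-count tilt formula are assumptions, and the `agree_bonus` channel is omitted. The key rule it demonstrates is that statistics for position i are read before token i is pushed into the table.

```python
from collections import defaultdict
import math

BASE_BETA = 2.0  # tilt strength (base_beta in the PR; value taken from the message above)

def tilt_logits(logits, counts, ctx):
    """Add a log-count bonus for tokens previously seen after this context."""
    if ctx is None or sum(counts[ctx].values()) == 0:
        return logits
    tilted = dict(logits)
    for tok, c in counts[ctx].items():
        tilted[tok] = tilted.get(tok, 0.0) + BASE_BETA * math.log1p(c)
    return tilted

def process_sequence(tokens, logits_per_pos):
    counts = defaultdict(lambda: defaultdict(int))
    ctx = None
    out = []
    for i, tok in enumerate(tokens):
        # 1) score position i using only statistics from tokens [0..i-1]
        out.append(tilt_logits(logits_per_pos[i], counts, ctx))
        # 2) only now consume token i into the count table ("token_push")
        if ctx is not None:
            counts[ctx][tok] += 1
        ctx = tok  # bigram context, purely for illustration
    return out
```

Because the table update happens strictly after scoring, the hint for position i never depends on the realized token at i.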
resouer added 5 commits to resouer/parameter-golf that referenced this pull request on Apr 10, 2026
dexhunter added a commit to dexhunter/parameter-golf that referenced this pull request on Apr 11, 2026
…g + Muon 0.97 — val_bpb 1.07747 (3-seed mean)

- 3-seed mean: 1.07747 bpb (std 0.00064) / 2.78321 nats
- ~15.99 MB artifact, 8×H100 SXM, 600s
- VarLen attention (within-document only), doc-independent LoRA TTT
- Parameter banking + triple depth recurrence + parallel residuals
- PyTorch MLP fallback (no Triton/CUTLASS dependency)
- Based on PR openai#1530, PR openai#1523, PR openai#1514
TanishGudise pushed a commit to TanishGudise/parameter-golf that referenced this pull request on May 1, 2026
Both experts gate on properties of the token being scored (target_i):
- within-doc C gate: !is_boundary[target_i] && !is_new_word[target_i] → within_valid[i]=1 only when the target is a within-word continuation
- word-start Python gate: starts_new_word_lut[target_i] → top_prob[i]>0 only when the target IS a word-start token

Both violate C1 causality (the hint for position i depends on the realized token i). The token expert is legal: its output is computed from the prefix state [0..i-1] before tok_i is consumed (token_push runs after the ctx_tbl lookup in the C process_chunk).

Fix: within_tau=99.0, word_tau=99.0, within_boost=0.0, word_boost=0.0 as defaults so both gates are always False. Token-only is the legal subset per the PR openai#1514 merge precedent.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
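The legality distinction in the commit message above can be made concrete with a small probe. All names below are illustrative, not the repository's code: a hint function `hint(tokens, i)` is C1-causal only if its value at position i is invariant to whichever token is actually realized at i.

```python
def is_causal(hint, prefix, vocab):
    """True iff the hint for position len(prefix) does not depend on the realized token there."""
    ref = hint(prefix + [vocab[0]], len(prefix))
    return all(hint(prefix + [t], len(prefix)) == ref for t in vocab[1:])

# Illegal word-start-style gate: fires based on the target token itself.
WORD_START = {5}  # hypothetical word-start token set
def word_gate(tokens, i):
    return tokens[i] in WORD_START  # reads the realized token at position i

# Legal token-expert-style hint: reads only the prefix, i.e. state before token i is pushed.
def token_expert(tokens, i):
    return tokens[i - 1] if i > 0 else -1
```

Running the probe flags `word_gate` as non-causal and passes `token_expert`, mirroring why only the token channel survived in the fix.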
TanishGudise added a commit to TanishGudise/parameter-golf that referenced this pull request on May 1, 2026
Recent sweep logs (named):
- S55: token-only ngram tilt baseline = 1.05814 (legal per PR openai#1514)
- S56: + 3 openai#2060 levers = 1.05790 (-0.00024)
- S57: + AsymLogit only = 1.05759 (-0.00055)
- S58: full stack = 1.05694 single seed (-0.00120, super-additive +0.00041 synergy)
- S59: S58 + EVAL_SEQ_LEN=3072 + NUM_PHASES=1 + WD=1.0 = 1.05657 single seed, eval 567s
- S60 OOM: S59 + EMA_DECAY=0.9 + batch=64 = OOM
- S60 retry: S58 + EMA_DECAY=0.9 + batch=32 = 1.05795 / 832s NON-COMPLIANT
- S61: S59 + TOKEN_BOOST=3.0 = 1.05678 single seed, eval 501s
- S62: S58 + NUM_PHASES=2 + WD=2.0 + eval=2816 = 1.05755

Earlier sweep logs (UUID-named): ~83 files covering S15-S54 sprint history.

Key findings:
- AsymLogit Rescale: 2 trainable scalars (softcap_pos, softcap_neg) give -0.00055 via global TTT polish
- Token-only n-gram tilt confirmed legal per PR openai#1514 (within_tau=99, word_tau=99, agree=0)
- 3 openai#2060 env-var levers (MATRIX_LR=0.028, LQER_ASYM_GROUP=32, TTT_LORA_LR=8e-5) stack super-additively
- EMA_DECAY=0.9 didn't transfer to our base
- NUM_PHASES=2 revert costs more pre-quant than it gains in TTT recovery
- Discovered val_tokens=47852544 vs canonical 47853343; need EVAL_INCLUDE_TAIL=1 for a clean comparison

Added .gitignore for final_model.pt (130 MB, over the GitHub limit), .so binaries, and pid files.
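As a rough illustration of the "AsymLogit Rescale" finding above, here is a hypothetical PyTorch sketch of two trainable scalars that softcap positive and negative logits with different strengths. The sweep notes only name the scalars (softcap_pos, softcap_neg); the tanh softcap form, the class name, and the initial values are assumptions.

```python
import torch
import torch.nn as nn

class AsymLogitRescale(nn.Module):
    """Two trainable scalars capping positive and negative logits asymmetrically (sketch)."""

    def __init__(self, init_pos=15.0, init_neg=15.0):
        super().__init__()
        self.softcap_pos = nn.Parameter(torch.tensor(init_pos))
        self.softcap_neg = nn.Parameter(torch.tensor(init_neg))

    def forward(self, logits):
        # Pick the cap per element by sign, then apply a smooth tanh softcap:
        # outputs are bounded in (-softcap_neg, softcap_pos) while staying
        # near-identity for small logits.
        cap = torch.where(logits >= 0, self.softcap_pos, self.softcap_neg)
        return cap * torch.tanh(logits / cap)
```

With only two parameters, a global TTT pass can adapt the caps cheaply, which is consistent with the -0.00055 "polish" described above.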
TanishGudise added a commit to TanishGudise/parameter-golf that referenced this pull request on May 1, 2026
…E_OUTSIDE=0

Seed 314: pre-quant 1.06128 / quant 1.06962 / final 1.05701 / eval 571.7s.

Compliance: ngram_hint_precompute_outside=False; precompute (166.95s) INSIDE the timer per PR openai#1514 precedent. Token-only tilt: within_gate=0, word_gate=0, legal per PR openai#1514. Size 15,943,530 bytes.

Single seed beats openai#2014's 3-seed mean (1.05759). Validating seeds 42 and 1234.
TanishGudise added a commit to TanishGudise/parameter-golf that referenced this pull request on May 1, 2026
Beats PR openai#1855 (merged rank 1, 1.06108) by 0.00438 bpb. Beats PR openai#2014 (best open, 1.05759) by 0.00089 bpb. Beats PR openai#2060 (1.05792) by 0.00122 bpb.

Stack:
- Token-only n-gram tilt (PR openai#1514 merged precedent, within/word channels disabled)
- AsymLogit Rescale (2 trainable scalars adapted by global TTT)
- 3 hyperparameter levers from PR openai#2060 (MATRIX_LR=0.028, LQER_ASYM_GROUP=32, TTT_LORA_LR=8e-5)
- PHASED_TTT_NUM_PHASES=1 (matches PR openai#2014)
- NGRAM_HINT_PRECOMPUTE_OUTSIDE=0 (precompute INSIDE the eval timer per PR openai#1514)

Compliance:
- All seeds eval ≤533.1s (cap 600s, 67-80s margin)
- All artifacts ≤15.95 MB (cap 16 MB)
- Token-only n-gram channel (within_gate=0, word_gate=0)
- Score-first TTT (per PR openai#402)
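The score-first TTT discipline that keeps these runs legal can be sketched as a loop shape. This is a minimal sketch assuming a chunked eval loop, not the repository's implementation: each chunk is scored under `torch.inference_mode()` with the current weights, and only afterwards does the optimizer take a step on that same chunk.

```python
import torch

def score_first_ttt(model, chunks, opt, loss_fn):
    """Score each chunk BEFORE adapting on it; returns the item-weighted mean loss."""
    total_loss, total_items = 0.0, 0
    for x, y in chunks:
        # 1) score this chunk with the weights as they stand, gradient-free
        with torch.inference_mode():
            total_loss += loss_fn(model(x), y).item() * y.numel()
            total_items += y.numel()
        # 2) only now take a gradient step on the chunk just scored
        opt.zero_grad(set_to_none=True)
        loss_fn(model(x), y).backward()
        opt.step()
    return total_loss / total_items
```

The reported metric therefore never reflects training on the chunk being measured, which is the legality condition the compliance list refers to.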
leon2k2k2k added a commit to leon2k2k2k/parameter-golf that referenced this pull request on May 5, 2026
Clarified explanation of the gate's behavior and updated the description of the fix in PR openai#1514.
Summary
Builds on @clarkkev's PR #1394 sp8192 stack and our own PR #1413 legal score-first TTT, adding:
- Muon momentum = 0.97 (vs 0.99 default); warmup 0.92→0.97 unchanged
- Causal token n-gram tilt (base_beta=2.0, agree_bonus=0.1) on top of legal score-first TTT; within-word and word-start experts explicitly disabled (within_beta=0, word_beta=0) because they cannot be made fully causal without losing most of the benefit.

Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128)
std_bpb = 0.00050, std_nats = 0.00128. All 3 seeds fit the 16 MB artifact cap and complete under 600s train + 600s eval.
Legality
Each chunk is scored under inference_mode() before any gradient update. No chunk is trained on before scoring.

Test plan