Record: 11L Backout + Int6 + SWA (val_bpb: 1.1364) #339
sheeki03 wants to merge 1 commit into openai:main
Conversation
Adds the Backout Connection: a learned residual subtraction from a mid-network hidden state. Improves val_bpb by 0.0071 over the PR openai#198 baseline with zero additional matrix parameters (one learned scalar).

- val_bpb: 1.1364 (sliding window, stride=64)
- Artifact: 16,170,051 bytes (170KB over cap, fixable with `INT5_MLP=1`)
- Hardware: 8xH100 SXM, 600s wallclock

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three low-risk additions:

- Memory Tokens (64 learnable embeddings, -0.014 A/B, PR openai#352)
- Backout Connection (learned mid-layer subtraction, -0.007, PR openai#339)
- Tight SWA (scale < 0.2, every 50 steps, replacing EMA, PR openai#374)

Bugs found and fixed during review (see the sketch after this list for the optimizer-grouping fix):

- memory_tokens and backout_lambda were not included in the optimizer parameter groups (code review)
- memory_tokens was appended to embed_params after optimizer creation, so the optimizer never saw it (/refine)
- Dead encoder-loop h_mid check removed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
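For context, a minimal sketch of what "build the parameter groups before the optimizer is created" looks like. The names (`memory_tokens`, `backout_lambda`, the group split, the learning rate) are assumptions for illustration, not the PR's exact code:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, num_layers=11, dim=512, num_memory_tokens=64):
        super().__init__()
        # stand-in blocks; the real model uses attention + MLP blocks
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        # new parameters from this PR series (hypothetical names)
        self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim) * 0.02)
        self.backout_lambda = nn.Parameter(torch.tensor(0.2))

model = Model()

# Build parameter groups BEFORE constructing the optimizer, so the new
# parameters actually receive updates. The bug was appending memory_tokens
# to embed_params after AdamW had already captured its groups.
matrix_params = [p for n, p in model.named_parameters()
                 if p.ndim >= 2 and "memory_tokens" not in n]
embed_params  = [model.memory_tokens]    # grouped with embeddings, no weight decay
scalar_params = [model.backout_lambda]   # one learned scalar, no weight decay

optimizer = torch.optim.AdamW([
    {"params": matrix_params, "weight_decay": 0.04},
    {"params": embed_params,  "weight_decay": 0.0},
    {"params": scalar_params, "weight_decay": 0.0},
], lr=3e-4)
```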
Community Review of Record: 11L Backout + Int6 + SWA (val_bpb: 1.1364)

BPB: 1.1364 | Compliance: LOOKS CLEAN (pure-neural submission, no TTT/SLOT/n-gram-cache)

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=9, vocab=1024, code=70854 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that it missed because it is factored into a helper file or hidden behind a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka (The Agora).
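For reviewers unfamiliar with the eval pattern referenced above, a minimal sketch of sliding-window bits-per-byte scoring with stride 64, assuming a causal LM that maps token ids to logits. Function and variable names are illustrative, not the submission's code:

```python
import math
import torch

@torch.no_grad()
def sliding_window_bpb(model, tokens, total_bytes, seq_len=2048, stride=64, device="cuda"):
    """Score each token once, conditioned on up to seq_len-1 preceding tokens."""
    nll_sum = 0.0
    prev_end = 0
    for begin in range(0, len(tokens), stride):
        end = min(begin + seq_len, len(tokens))
        n_new = end - max(prev_end, begin + 1)   # targets not scored by earlier windows
        if n_new <= 0:
            break
        ids = torch.tensor(tokens[begin:end], device=device).unsqueeze(0)
        logp = torch.log_softmax(model(ids).float(), dim=-1)   # (1, T, vocab)
        targets = ids[0, -n_new:]                               # the "new" tokens in this window
        nll_sum += -logp[0, -n_new - 1:-1].gather(1, targets.unsqueeze(1)).sum().item()
        prev_end = end
        if end == len(tokens):
            break
    # bits per byte = (total nats / ln 2) / total bytes of the validation text
    return (nll_sum / math.log(2)) / total_bytes
```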
Record: 11L Backout + Int6 + SWA (val_bpb: 1.1364)
val_bpb: 1.1364 (sliding window, stride=64) | 16.17 MB | 8xH100 SXM, 600s
Known Issue
Artifact is 16,170,051 bytes, 170KB over the 16,000,000-byte cap. The code supports
`INT5_MLP=1`, which switches MLP quantization from int6 to int5, saving 1-2MB. A follow-up run is planned to bring the artifact under the cap.
Progress from prior submissions
Note: Our baseline replication of PR #198's config yielded 1.1435 (vs. their reported 1.1318), likely due to hardware/driver differences (RunPod community cloud vs. dedicated). Relative to our own baseline, Backout improves val_bpb by 0.0071 (1.1435 - 1.1364 = 0.0071).
What's new
Backout Connection: a learned residual subtraction from a mid-network hidden state. After the U-Net encoder-decoder forward pass, the model subtracts
`lambda * h_mid` from the final representation, where `lambda` is a learned scalar (initialized at 0.2) and `h_mid` is the hidden state at layer `num_layers // 2`. This acts as a learned negative residual that removes redundant mid-network information, sharpening the final representation for the language modeling head. Zero additional matrix parameters, only one learned scalar.
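A minimal sketch of the mechanism, assuming a plain residual stack (the block internals and U-Net skip wiring are simplified; this is not the submission's exact code):

```python
import torch
import torch.nn as nn

class BackoutTransformer(nn.Module):
    def __init__(self, num_layers=11, dim=512, vocab=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        # stand-in blocks; the real model uses attention + MLP blocks with U-Net skips
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.ReLU())
            for _ in range(num_layers)
        ])
        self.backout_lambda = nn.Parameter(torch.tensor(0.2))  # learned scalar, init 0.2
        self.mid = num_layers // 2                              # layer 5 for 11 layers
        self.lm_head = nn.Linear(dim, vocab, bias=False)

    def forward(self, ids):
        h = self.embed(ids)
        h_mid = None
        for i, block in enumerate(self.blocks):
            h = h + block(h)              # residual blocks
            if i == self.mid:
                h_mid = h                 # capture mid-network hidden state
        # Backout Connection: subtract a learned fraction of the mid-layer state
        h = h - self.backout_lambda * h_mid
        return self.lm_head(h)

# usage: logits = BackoutTransformer()(torch.randint(0, 1024, (1, 16)))
```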
Controlled comparison (same hardware, same run)
Results
Architecture
11 layers, 512 dim, 8 heads / 4 KV heads, MLP 3x, relu-squared, SmearGate, BigramHash(4096), OrthoInit, Muon + AdamW with WD=0.04, SWA, int6 mixed quant + zstd, FA3, seq 2048, sliding window eval stride=64.
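The same spec expressed as a config sketch; field names are illustrative assumptions, not the repo's actual config class:

```python
from dataclasses import dataclass

@dataclass
class RecordConfig:
    # architecture (hypothetical field names mirroring the list above)
    num_layers: int = 11
    dim: int = 512
    num_heads: int = 8
    num_kv_heads: int = 4
    mlp_ratio: int = 3
    activation: str = "relu_squared"
    bigram_hash_size: int = 4096
    seq_len: int = 2048
    # training / eval
    weight_decay: float = 0.04        # AdamW groups; Muon for matrix params
    eval_stride: int = 64             # sliding-window eval
    # compression
    quant: str = "int6_mixed_zstd"
    # backout connection
    backout_init: float = 0.2
```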
Backout layer: `num_layers // 2` (layer 5). Lambda: learned scalar, initialized at 0.2.
Run command
Hardware
8xH100 SXM 80GB HBM3 (RunPod, EUR-IS-3)
Next steps
`INT5_MLP=1` to bring the artifact under 16MB
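As an illustration only (the `INT5_MLP` flag name is from this PR, but the helper below and its key matching are hypothetical), the toggle amounts to reading an environment variable and dropping the MLP weights from 6-bit to 5-bit quantization while the rest of the int6 mixed scheme stays unchanged:

```python
import os
import torch

def quantize_symmetric(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Hypothetical helper: symmetric fake-quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

MLP_BITS = 5 if os.environ.get("INT5_MLP", "0") == "1" else 6

def quantize_mlp_weights(state_dict):
    # Only MLP matrices change precision; everything else keeps its existing treatment.
    return {
        k: quantize_symmetric(v, MLP_BITS) if ".mlp." in k and v.ndim == 2 else v
        for k, v in state_dict.items()
    }
```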