
Record: 11L Int5-All + XSA5 + EMA + 10% Pruning (val_bpb=1.1466) #389

Open
trasnake87 wants to merge 1 commit into openai:main from trasnake87:submission/11L-int5-xsa-ema

Conversation

@trasnake87

11L Int5-All + XSA5 + EMA + 10% Pruning

val_bpb = 1.1466 (sliding window stride=64) | 14.8 MB artifact | 8xH100 SXM, 600s

What's new

Two novel modifications on top of the XSA+EMA stack (PR #315):

  1. Uniform int5 quantization for both MLP and attention weights. Prior SOTA uses int5 for MLP and int6 for attention. Through systematic post-training quantization search, we found attention weights tolerate int5 as well as MLP weights. Saves ~1MB compressed.

  2. 10% magnitude pruning after EMA averaging, before quantization. Zeroes the smallest weights, which compress to nearly nothing under zstd. Saves ~500KB.

Combined, the artifact drops from ~15.6 MB to 14.8 MB with minimal quality impact; a rough sketch of the prune-then-quantize step is shown below.
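
To make the two steps concrete, here is a minimal sketch of a prune-then-quantize pass over a weight dict. It illustrates the idea described above and is not the submission's actual code: the function name, the per-tensor (rather than global) pruning threshold, and the symmetric [-15, 15] int5 codebook are all assumptions.

```python
import torch

def prune_then_quantize_int5(state_dict, prune_frac=0.10):
    """Illustrative post-training pass: zero the smallest `prune_frac` of
    weights by magnitude (per tensor here; the PR does not say global vs.
    per-tensor), then symmetrically quantize to int5 (codes in [-15, 15])."""
    packed = {}
    for name, w in state_dict.items():
        w = w.float()
        # 1) magnitude pruning: zero the smallest fraction of entries
        k = int(prune_frac * w.numel())
        if k > 0:
            thresh = w.abs().flatten().kthvalue(k).values
            w = torch.where(w.abs() <= thresh, torch.zeros_like(w), w)
        # 2) symmetric int5 quantization: map max |w| onto code 15
        scale = w.abs().max().clamp(min=1e-8) / 15.0
        q = torch.clamp(torch.round(w / scale), -15, 15).to(torch.int8)
        packed[name] = (q, scale.item())
    return packed
```

The zeroed entries become long runs of identical int5 codes, which is why they are nearly free after zstd; dequantization is just `q * scale`.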

Results (seed 42, 8xH100 SXM)

| Metric | Value |
| --- | --- |
| Steps | 5,987 |
| Pre-quant val_bpb | 1.1504 |
| Post-quant val_bpb (standard eval) | 1.1703 |
| Post-quant val_bpb (sliding window, stride 64) | 1.1466 |
| Artifact size | 14,811,335 bytes |
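
For readers comparing the standard and sliding-window rows: the sliding-window number re-scores each chunk of 64 tokens with nearly a full context window behind it, which is why it comes out lower than the standard chunked eval. Below is a hedged sketch of that evaluation pattern; the model interface and the bits-per-byte conversion are assumptions, not the submission's eval code.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bits_per_token(model, tokens, ctx_len=1024, stride=64, device="cuda"):
    """Assumed interface: model(ids) -> per-position logits of shape (1, T, V).
    Each window scores only its last `stride` targets, so every scored token
    has at least (ctx_len - stride) tokens of context. Returns bits per token;
    dividing by the dataset's bytes-per-token ratio would give bpb."""
    total_nll, total_scored = 0.0, 0
    n = tokens.numel()
    for end in range(ctx_len, n, stride):  # trailing partial chunk skipped for brevity
        inp = tokens[end - ctx_len:end].unsqueeze(0).to(device)
        tgt = tokens[end - ctx_len + 1:end + 1].to(device)
        logits = model(inp).squeeze(0)
        nll = F.cross_entropy(logits[-stride:], tgt[-stride:], reduction="sum")
        total_nll += nll.item()
        total_scored += stride
    return total_nll / total_scored / math.log(2)  # nats/token -> bits/token
```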

Run

torchrun --standalone --nproc_per_node=8 train_gpt.py

All defaults baked into the script. No env vars needed.

Built on PR #315 by @alertcat.

@ElliotSlusky

not a record

rarce added a commit to rarce/parameter-golf that referenced this pull request Mar 23, 2026
Previous run: 18 MB artifact with int6 only on middle layers.
Fix: apply int6 step=4 rounding to ALL block and VE weights (not just
layers 3-7). Additionally prune smallest 10% of weights to zero for
better zstd compression. PR openai#389 validates this approach (~500KB savings).

Expected: 18 MB → ~15 MB (int6-all saves ~1.5 MB, pruning saves ~500KB).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
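
Aside on the "int6 step=4 rounding" in the quoted commit: one plausible reading is mapping weights onto an int8-style grid and then snapping the codes to multiples of 4, which leaves roughly 64 levels (6 bits). The sketch below shows that interpretation only; it is an assumption about the referenced repo, not its actual code.

```python
import torch

def int6_step4_round(w: torch.Tensor) -> torch.Tensor:
    """Hypothetical reading of 'int6 step=4': quantize to the [-127, 127]
    int8 grid, snap codes to multiples of 4 (~64 levels = 6 bits), then
    dequantize so the model can be evaluated unchanged."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.round(w / scale)                           # int8-style codes
    q = (torch.round(q / 4.0) * 4.0).clamp(-124, 124)    # step-4 snapping
    return (q * scale).to(w.dtype)
```
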
@MatoTeziTanka

Community Review — Record: 11L Int5-All + XSA5 + EMA + 10% Pruning (val_bpb=1.1466)

BPB: 1.1466 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 2a18890025a7, file records/track_10min_16mb/2026-03-22_11L_Int5All_XSA5_EMA_Prune10/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.
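
To make "static code review" concrete, here is a toy version of the kind of AST scan such a classifier could run over the submission file. The pattern bank and function names below are illustrative assumptions; the actual classify_prs.py is not shown in this thread.

```python
import ast

# Illustrative pattern bank only (not the real classify_prs.py patterns).
SUSPECT_SUBSTRINGS = {"ttt", "test_time", "slot", "ngram_cache", "finetune_on_val"}

def scan_submission(path: str) -> list[str]:
    """Flag function/class definitions whose names hit the pattern bank."""
    tree = ast.parse(open(path, encoding="utf-8").read())
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if any(pat in node.name.lower() for pat in SUSPECT_SUBSTRINGS):
                hits.append(f"line {node.lineno}: {node.name}")
    return hits
```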

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=69722 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.

@trasnake87
Author

Thanks for the thorough review, @MatoTeziTanka! Confirming the submission is pure-neural — no TTT, no SLOT, no n-gram cache, and the eval path is the standard sliding-window stride-64 pattern.

On the 3-seed validation requirement: we ran seed=42 and got val_bpb=1.1466 on 8×H100 SXM within the 10min + 10min budget. We'd be happy to run seeds 1337 and 2024 for statistical significance if maintainers want to consider this as a record candidate — just let us know.

