
[Record Submission] QAT Int5/Int6 + Backout + U-Net Skips + BigramHash(10240) + SWA50 — val_bpb=1.1477#295

Open
gowtham0992 wants to merge 1 commit into openai:main from gowtham0992:submission-qat-backout

Conversation

@gowtham0992

Summary

val_bpb: 1.1477 (sliding-window eval, stride=64, after the int5/int6 quantization round-trip)
Artifact size: 15.94 MB
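For context, the stride-64 sliding-window evaluation quoted above can be sketched as below. This is a minimal illustration, not the submission's code; the `model` interface and the `bytes_per_token` constant are assumptions (the record track divides total NLL by the byte length of the raw validation text):

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bpb(model, tokens, ctx_len=1024, stride=64, bytes_per_token=4.0):
    """Stride-64 sliding-window eval: each window scores only its last
    `stride` tokens, so every token is predicted with near-full left context."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, tokens.numel() - ctx_len, stride):
        window = tokens[start : start + ctx_len + 1]
        logits = model(window[:-1].unsqueeze(0))            # (1, ctx_len, vocab)
        # Score only the final `stride` positions of this window.
        nll = F.cross_entropy(logits[0, -stride:], window[-stride:], reduction="sum")
        total_nll += nll.item()
        total_tokens += stride
    # bits-per-byte = (nats / ln 2) / bytes
    return total_nll / math.log(2) / (total_tokens * bytes_per_token)
```

The stride trades eval cost for context: stride=64 runs ~16x more forward passes than non-overlapping windows, but no scored token sees fewer than ctx_len - stride tokens of history.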

Novel Contributions

  • Quantization-Aware Training (QAT): straight-through estimator (STE) with int5 MLP / int6 attention weights during training — reduces the quantization gap from ~0.016 to ~0.005 BPB
  • Backout: learned subtraction of a scaled mid-network residual (λ·x_mid) from the final output
  • U-Net Skip Connections: 5+5 encoder-decoder with learned per-dim skip weights
  • SVD Embedding Init: Spectral decay 1/√k for tied embeddings
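The STE fake-quantization in the first bullet can be sketched as below. This is a minimal per-tensor symmetric version under my own assumptions; the submission's actual scale handling (per-channel grouping, clipping, etc.) may differ:

```python
import torch

def fake_quant_ste(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Fake-quantize x to a signed `bits`-bit grid in the forward pass while
    letting gradients flow through unchanged (straight-through estimator)."""
    qmax = 2 ** (bits - 1) - 1                  # 15 for int5, 31 for int6
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax, qmax) * scale
    return x + (q - x).detach()                 # forward: q, backward: identity
```

During training each weight tensor would pass through `fake_quant_ste(w, 5)` (MLP) or `fake_quant_ste(w, 6)` (attention) before its matmul, so the network learns weights that survive the post-training int5/int6 round-trip.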

Shared Techniques

BigramHash(10240), SmearGate, Simple Muon + compiled NS5, SWA every 50 steps, orthogonal init, 8% magnitude pruning, zstd-22
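One way to read `BigramHash(10240)` is a hashed bigram-embedding table added to the standard token embedding. The sketch below is a guess at the general pattern; the hash function, mixing constant, and padding behavior are all assumptions, not the submission's code:

```python
import torch
import torch.nn as nn

class BigramHashEmbed(nn.Module):
    """Token embedding plus a hashed (prev, cur) bigram embedding."""
    def __init__(self, vocab_size: int, dim: int, n_buckets: int = 10240):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.bigram = nn.Embedding(n_buckets, dim)
        self.n_buckets = n_buckets

    def forward(self, idx: torch.Tensor) -> torch.Tensor:  # idx: (B, T)
        prev = torch.roll(idx, shifts=1, dims=1)
        prev[:, 0] = 0  # no left context at the first position
        # Illustrative multiplicative hash of the (prev, cur) pair into buckets.
        h = (prev * 1000003 + idx) % self.n_buckets
        return self.tok(idx) + self.bigram(h)
```

With 10240 buckets over a 1024-token vocab, the ~1M possible bigrams collide heavily, but frequent bigrams still get a mostly-dedicated extra feature vector for near-zero compute cost.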

Run Details

  • 6457 steps in 10 min, 92.9ms/step avg
  • 8xH100 SXM (RunPod)
  • seed=42, single run

See README.md for full details.

@mohosy

mohosy commented Mar 21, 2026

bigramhash 10240 is interesting, thwu1 uses that too. did you try any sizes in between like 4096 or was it just 2048 vs 10240

@gowtham0992
Author

> bigramhash 10240 is interesting, thwu1 uses that too. did you try any sizes in between like 4096 or was it just 2048 vs 10240

Good question. I just went with 10240 and didn't try anything in between; will iterate on sizes in future runs.

@MatoTeziTanka

Community Review — [Record Submission] QAT Int5/Int6 + Backout + U-Net Skips + BigramHash(10240) + SWA50 — val_bpb=1.1477

BPB: 1.1477 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA f96f6de179f9, file records/track_10min_16mb/2026-03-20_QAT_Int5Int6_BigramHash_Backout_SWA50/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.02s, dim=512, layers=10, vocab=1024, code=46872 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting — if the template misread your code, please call it out so I can iterate the classifier.
