
Non-record: LeakyReLU(0.5)^2 on SmearGate + BigramHash + Int6 stack (1.1444 bpb)#1256

Open
oidebrett wants to merge 1 commit into openai:main from oidebrett:submission/2026-04-02_LeakyReLU_SmearGate_BigramHash

Conversation

@oidebrett

This PR adds a non-record 16MB submission under records/track_non_record_16mb/2026-04-02_LeakyReLU_SmearGate_BigramHash_Int6_SWA/.

val_bpb: 1.1444 on 8xH100 SXM in 600s (seed 1337).

One-line change on top of PR #162's stack: replaced relu^2 with leaky_relu(0.5)^2 in the MLP activation (idea from PR #549). I ran both back-to-back on the same pod: the base stack gives 1.1459, this change gives 1.1444. A small gain, but a gain!
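
For reference, a minimal sketch of the swap (names are illustrative; I'm assuming the square is applied after the activation, as in the relu^2 baseline — see train_gpt.py for the actual MLP code):

```python
import torch
import torch.nn.functional as F

# Baseline (PR #162 stack): squared ReLU in the MLP
def act_relu2(x: torch.Tensor) -> torch.Tensor:
    return F.relu(x).square()

# This PR: squared LeakyReLU with negative slope 0.5 (idea from PR #549).
# Squaring makes negative pre-activations contribute positively too,
# scaled by 0.25 relative to the positive side.
def act_leaky_relu2(x: torch.Tensor) -> torch.Tensor:
    return F.leaky_relu(x, negative_slope=0.5).square()
```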

Full stack: SmearGate + BigramHash(4096) + Int6 QAT + SWA(30 ckpts) + zstd-22 + MLP 3x + OrthoInit + Muon WD 0.04 + sliding window eval.
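
For anyone unfamiliar with the SWA piece of the stack, here's a hypothetical sketch of uniform checkpoint averaging (paths and state-dict layout are assumptions, not this submission's actual code, which assumes checkpoints store plain tensor state dicts):

```python
import torch

def average_checkpoints(paths):
    """Uniformly average model state dicts from a list of checkpoints (SWA)."""
    avg = None
    for p in paths:
        sd = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in sd.items()}
        else:
            for k in avg:
                avg[k] += sd[k].float()
    for k in avg:
        avg[k] /= len(paths)
    return avg

# e.g. averaging the last 30 checkpoints of a 600-step run (illustrative paths):
# swa_state = average_checkpoints([f"ckpt_{i:04d}.pt" for i in range(570, 600)])
```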

Files

  • README.md
  • submission.json
  • train_gpt.py
  • train_seed1337.log

This is my first submission. I'm new to ML training and this competition has been a great learning experience. Built up from Kaggle T4 -> 1xH100 -> 8xH100 over a couple of weeks. I've applied for the Development Compute Grant and would appreciate being considered. With more compute I'd like to keep experimenting with more layers, better quantization, and some of the architectural ideas that are still underexplored.

Thanks to @raahilshah, @thwu1, @signalrush, @abaybektursun and the whole community for sharing techniques so openly. And hats off to the leaders; you really know your stuff!

…nt6 stack

val_bpb 1.1444 on 8xH100 SXM in 600s. One-line activation change
(relu^2 -> leaky_relu(0.5)^2) on top of PR openai#162 stack.
@MatoTeziTanka

Community Review — Non-record: LeakyReLU(0.5)^2 on SmearGate + BigramHash + Int6 stack (1.1444 bpb)

BPB: 1.1444 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA adc94d24281d, file records/track_non_record_16mb/2026-04-02_LeakyReLU_SmearGate_BigramHash_Int6_SWA/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.
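
For context, the stride-64 sliding-window pattern referenced above looks roughly like this (a sketch under assumed names; `ctx`, the model call signature, and the bits conversion are illustrative, and true bits-per-byte additionally needs the token-to-byte ratio):

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_eval(model, tokens, ctx=1024, stride=64):
    """Score each token with near-full left context by sliding the window
    forward `stride` tokens at a time and only counting the new positions.
    Returns mean bits per token."""
    total_nll, total_count = 0.0, 0
    for start in range(0, len(tokens) - ctx, stride):
        window = torch.tensor(tokens[start : start + ctx + 1])
        x, y = window[:-1].unsqueeze(0), window[1:].unsqueeze(0)
        logits = model(x)  # assumed shape: (1, ctx, vocab)
        # only the last `stride` targets are new in this window
        loss = F.cross_entropy(logits[0, -stride:], y[0, -stride:], reduction="sum")
        total_nll += loss.item()
        total_count += stride
    return total_nll / total_count / math.log(2)  # nats -> bits per token
```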

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.09s, dim=512, layers=9, vocab=1024, code=52265 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via the deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
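
For transparency, the classification pass boils down to something like this (a simplified sketch; the hypothetical FLAGGED_NAMES bank stands in for the real pattern bank in classify_prs.py):

```python
import ast

# Hypothetical pattern bank: identifier substrings that flag
# non-pure-neural mechanisms (TTT, SLOT, n-gram caches, ...)
FLAGGED_NAMES = ("ttt", "slot", "ngram_cache", "test_time")

def classify_source(path):
    """Return any suspicious function/class names found in a source file."""
    tree = ast.parse(open(path).read())
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            name = node.name.lower()
            if any(p in name for p in FLAGGED_NAMES):
                hits.append(node.name)
    return hits  # empty list -> "LOOKS CLEAN" under this heuristic
```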

