
10L XSA + EMA + Partial RoPE + LN Scale (val_bpb: 1.1365)#458

Open
ofirkris wants to merge 5 commits into openai:main from ofirkris:tap-mobile-submission

Conversation

@ofirkris

Results

| Seed | val_bpb | Artifact (bytes) | Steps |
|------|---------|------------------|-------|
| 42   | 1.1365  | 15,759,319       | 6491  |
| 1337 | 1.1366  | 15,820,386       | 6520  |
  • 8xH100 SXM, 600s, 92ms/step
  • PyTorch 2.7.0 + FlashAttention 2.8.3
  • No TTT, no test-time adaptation

Techniques

10L 512d, 3x MLP, XSA last 4 layers, EMA 0.997, Partial RoPE 16/64, LN Scale,
SmearGate, BigramHash(10240), Int5 MLP / Int6 attn, FP16 embeds, 3.2% pruning, zstd-22,
sliding window eval stride=64
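The EMA 0.997 entry above refers to an exponential moving average of the model weights. A minimal sketch of the standard per-step blend, assuming plain dict-of-floats parameters (names are illustrative, not taken from the submission):

```python
def ema_update(ema_params, params, decay=0.997):
    """Blend the current weights into the EMA copy in place.

    With decay=0.997 each step keeps 99.7% of the running average and
    mixes in 0.3% of the fresh weights; eval then uses ema_params.
    """
    for k, p in params.items():
        ema_params[k] = decay * ema_params[k] + (1.0 - decay) * p
    return ema_params

# One update moves the EMA weight 0.3% of the way toward the new value:
ema = ema_update({"w": 1.0}, {"w": 0.0})  # ema["w"] -> 0.997
```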

@mohosy

mohosy commented Mar 23, 2026

Partial RoPE is interesting; I haven't seen that in many submissions yet. How many dims did you find work best? 16 seems low, but if it works it works.

@ofirkris
Author

> Partial RoPE is interesting; I haven't seen that in many submissions yet. How many dims did you find work best? 16 seems low, but if it works it works.

I tested 16 out of 64 dims (25%) based on ablations from other competitive runs on this challenge.

The intuition is that most heads don't need full positional information: leaving 48 dims position-free lets them learn content-based attention patterns.
I didn't sweep extensively (tried 0, 16, 32, and 64 rotary dims); 16 gave the best result on my setup. It would be interesting to see whether anyone finds a different sweet spot at larger layer counts.
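For readers unfamiliar with the technique, here is a minimal pure-Python sketch of partial RoPE as described above: rotate only the first 16 of 64 head dims in adjacent pairs and pass the remaining 48 through untouched. Function and parameter names are illustrative, not taken from the submission's `train_gpt.py`.

```python
import math

def partial_rope(x, pos, rope_dims=16, base=10000.0):
    """Apply rotary position embedding to the first `rope_dims` entries
    of one head vector `x` (head_dim=64 in this PR); leave the rest as-is.

    Dims are rotated in pairs (2i, 2i+1) by an angle that depends on the
    token position `pos` and the pair's frequency.
    """
    out = list(x)
    for i in range(rope_dims // 2):
        theta = pos / (base ** (2 * i / rope_dims))
        c, s = math.cos(theta), math.sin(theta)
        a, b = x[2 * i], x[2 * i + 1]
        out[2 * i] = a * c - b * s
        out[2 * i + 1] = a * s + b * c
    return out
```

At `pos=0` the rotation is the identity, and at any position the last 48 dims come through unchanged, which is exactly what lets those dims carry purely content-based features.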

@MatoTeziTanka

Community Review — 10L XSA + EMA + Partial RoPE + LN Scale (val_bpb: 1.1365)

BPB: 1.1365 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 663a135c138b, file records/track_10min_16mb/2026-03-22_XSA_EMA_PartialRoPE_LNScale_TTT/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.
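The stride-64 sliding-window eval pattern referred to above can be sketched as follows. This is a hedged illustration assuming a per-token scoring callback, not the submission's actual eval code; in practice each window is a single batched forward pass that only counts the last `stride` positions.

```python
import math

def sliding_window_bpb(logprob_fn, tokens, window=1024, stride=64):
    """Score tokens with up to `window` tokens of left context, advancing
    `stride` tokens per window and counting only the newly covered positions.

    `logprob_fn(context, target)` is a stand-in for the model's natural-log
    probability of `target` given `context`. Returns mean bits per token
    (bits per byte when tokens are bytes, as in this track).
    """
    total_nll, count = 0.0, 0
    for start in range(0, len(tokens), stride):
        lo = max(0, start + stride - window)  # left edge of this window
        for i in range(start, min(start + stride, len(tokens))):
            if i == 0:
                continue  # no context for the first token
            total_nll += -logprob_fn(tokens[lo:i], tokens[i])
            count += 1
    return total_nll / (count * math.log(2))
```

A quick sanity check: a model that assigns uniform probability 1/256 to every byte should score exactly 8 bits per byte.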

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 1.88s, dim=512, layers=10, vocab=1024, code=58978 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate on the classifier.
