
Add: 11L Complement Training + TTT + No-JEPA submission (val_bpb 1.0855) #1257

Open
BoxiYu wants to merge 2 commits into openai:main from BoxiYu:submit/complement-ttt-no-jepa-10855

Conversation


BoxiYu commented Apr 2, 2026

No description provided.

Boxi Yu added 2 commits April 1, 2026 23:12
- Non-record submission: significantly beats SOTA (1.1228) with val_bpb 1.0855
- Complement Training downweights easy tokens so the transformer focuses on hard tokens (see the sketch after this list)
- TTT (Test-Time Training) fine-tunes on the validation set
- Disabling JEPA saves ~920K parameters
- Hardware: 8xA100; exceeds the 10-minute time limit (non-record eligible)
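Neither commit's diff is shown inline here, so below is a minimal sketch of how the two loss-side ideas could look in PyTorch. It is a guess at the mechanism, not the PR's code: `complement_weighted_loss` (a focal-loss-style weighting) and `test_time_train` are hypothetical names, and `alpha`, `steps`, and `lr` are placeholder values.

```python
import itertools

import torch
import torch.nn.functional as F


def complement_weighted_loss(logits, targets, alpha=1.0):
    """Cross-entropy that downweights tokens the model already gets right.

    `alpha` is a made-up knob controlling how hard easy tokens are
    suppressed; the PR does not document its actual weighting scheme.
    """
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),  # (B*T, vocab)
        targets.view(-1),                  # (B*T,)
        reduction="none",
    )
    p_correct = torch.exp(-ce)                     # model's prob. of the true token
    weights = (1.0 - p_correct).detach() ** alpha  # near 0 for easy tokens
    return (weights * ce).sum() / weights.sum().clamp_min(1e-8)


def test_time_train(model, val_loader, steps=50, lr=1e-5):
    """Briefly fine-tune on the evaluation stream before measuring bpb.

    Hyperparameters are placeholders, not the submission's values.
    """
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for x, y in itertools.islice(itertools.cycle(val_loader), steps):
        loss = complement_weighted_loss(model(x), y)
        opt.zero_grad(set_to_none=True)
        loss.backward()
        opt.step()
```

The weight `(1 - p_correct) ** alpha` pushes easy tokens (where the model is already confident) toward zero gradient while leaving hard tokens near full weight, which is one plausible reading of "downweights easy tokens". It is also the mechanism the comment below argues against under a fixed time budget.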

SPThole commented Apr 2, 2026

This doesn't seem to be a valid record submission: it uses an A100 under the assumption that it is 6x slower, when in reality it is not. In my observation, weighted loss performs worse under a time constraint: with limited training time, gradient competition makes the model underperform even on the easier tokens. The gain you are getting comes from the extra training time.

HateBunnyPlzzz added a commit to Itssshikhar/parameter-golf that referenced this pull request Apr 2, 2026
Approaches revamped (old eval-only approaches removed):
- 01: Low-Rank Factored MLP (18 layers in 16MB via rank-128 MLP factors)
- 02: Reptile Meta-Learning Warmdown (meta-optimize for TTT adaptability)
- 03: SVD + Quantized Factors (13 layers via spectral compression)
- 04: Multi-Token Prediction + BPB-Weighted Loss (training loss innovation)
- 05: Gram-Newton-Schulz + FP8 Training (30% more steps in 10 min)

Unmerged PR research saved to unmerged_runs/:
- PR openai#1263: SLOT (0.9354 BPB, legality contested)
- PR openai#1246: Trinity Ternary (0.9650 BPB)
- PR openai#1241: MDLM Diffusion (0.9901 BPB)
- PR openai#1252: WARP (1.0713 BPB)
- PR openai#1257: Complement Training (1.0855 BPB)
- PR openai#1274: Parallel Residuals + Depth Recurrence (1.0876 BPB)
- PR openai#1260: MuonEq-R + Depth Recurrence (1.0929 BPB)
- PR openai#1254: XSA + LoRA TTT (1.1070 BPB)

Key finding: without eval tricks, frontier is ~1.09 BPB (PR openai#1260)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>