Non-record: Byte-level transformer + JEPA auxiliary loss (val_bpb: 1.1903) #832
jfprincz wants to merge 1 commit into openai:main from
Conversation
Community Review — Non-record: Byte-level transformer + JEPA auxiliary loss (val_bpb: 1.1903)
BPB: 1.1903 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA …): static review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=13, vocab=260, code=71203 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
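For readers unfamiliar with the compliance pass described above, here is a minimal sketch of what an AST-based scan for banned constructs could look like. The substring patterns and the helper name `flagged_definitions` are illustrative guesses, not the reviewer's actual tooling.

```python
import ast

BANNED_SUBSTRINGS = ("ttt", "slot", "ngram_cache", "ngramcache")   # illustrative patterns only

def flagged_definitions(source: str) -> list[str]:
    """Names of function/class definitions matching a banned pattern (empty list = looks clean)."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if any(pat in node.name.lower() for pat in BANNED_SUBSTRINGS):
                hits.append(node.name)
    return hits

print(flagged_definitions("def ttt_adapt(model, val_tokens): ...\ndef forward(x): ..."))
# -> ['ttt_adapt']
```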
Non-record: Byte-level transformer + JEPA auxiliary loss (val_bpb: 1.1903)
val_bpb: 1.1903 (sliding window, stride=512) | 14.4 MB | 8xH100 SXM, 600s
Byte-level autoregressive transformer (vocab 260, no tokenizer) with a lightweight JEPA auxiliary loss contributing ~0.1% of peak gradient signal. Beats the sp1024 baseline (1.2244) by 0.034 BPB.
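The headline number is measured with a sliding-window evaluation at stride 512. A rough sketch of that metric is below; it assumes a `model(x)` that returns next-byte logits of shape [B, T, vocab] and a window length of 1024 (the window length is not stated in the PR), and scores each byte exactly once.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bpb(model, tokens, window: int = 1024, stride: int = 512) -> float:
    """tokens: 1-D LongTensor of byte IDs; returns bits per byte."""
    nll, n_scored, scored_upto = 0.0, 0, 0
    for start in range(0, len(tokens) - 1, stride):
        end = min(start + window, len(tokens))
        x = tokens[start:end - 1].unsqueeze(0)      # window inputs
        y = tokens[start + 1:end].unsqueeze(0)      # next-byte targets
        logits = model(x)                           # assumed shape: [1, T, vocab]
        skip = max(0, scored_upto - (start + 1))    # targets the previous window already counted
        nll += F.cross_entropy(logits[0, skip:], y[0, skip:], reduction="sum").item()
        n_scored += y.shape[1] - skip
        scored_upto = end
        if end == len(tokens):
            break
    return nll / n_scored / math.log(2)             # nats per byte -> bits per byte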
Ablation: JEPA contribution
JEPA adds a 0.01 BPB improvement at 5% overhead. The improvement is consistent across seeds and evaluation methods (pre-quant, post-quant, sliding).
Architecture
13-layer byte-level autoregressive transformer (vocab=260, no BPE/SentencePiece). The primary objective is standard next-byte CE loss. A lightweight JEPA module predicts chunk-level latent representations as an auxiliary signal (λ_max=0.001), giving a 0.01 BPB improvement over pure AR. Chunk prediction inspired by LeWM.
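The PR does not include the JEPA module itself; as a rough sketch, assuming mean-pooled chunk latents, a stop-gradient target, and a small MLP predictor (all illustrative choices, not the submission's actual design), the combined objective could look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JEPAAuxLoss(nn.Module):
    def __init__(self, dim: int = 512, chunk: int = 16):
        super().__init__()
        self.chunk = chunk
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        """hidden: [B, T, D] backbone states; predict the next chunk's mean latent."""
        B, T, D = hidden.shape
        T = (T // self.chunk) * self.chunk
        chunks = hidden[:, :T].reshape(B, -1, self.chunk, D).mean(dim=2)   # [B, C, D]
        pred = self.predictor(chunks[:, :-1])    # predict chunk c+1 from chunk c
        target = chunks[:, 1:].detach()          # stop-grad stand-in for a target encoder
        return F.mse_loss(pred, target)

def total_loss(logits, targets, hidden, jepa: JEPAAuxLoss, lam: float = 1e-3):
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return ce + lam * jepa(hidden)
```

With λ_max = 0.001 the auxiliary term contributes on the order of 0.1% of the gradient signal, consistent with the description above.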
Carried from our sp1024 stack: Muon+WD=0.04, EMA 0.997, XSA last 4 layers, Partial RoPE 16 dims, LN Scale, SmearGate, BigramHash(4096,32), OrthoInit+muP, int6+zstd-22, FA3.
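Of the carried-over pieces, int6+zstd-22 is presumably what keeps the 14.4 MB artifact under the 16 MB cap. A hedged sketch of such a scheme (symmetric 6-bit quantization, bit packing, zstd level 22; the submission's actual scale handling and tensor layout are not shown in the PR) follows:

```python
import numpy as np
import zstandard as zstd   # pip install zstandard

def compress_weights(w: np.ndarray) -> tuple[bytes, float]:
    """Symmetric 6-bit quantization (levels -31..31), bit-packed, then zstd level 22."""
    w = np.asarray(w, dtype=np.float32).reshape(-1)
    scale = float(np.abs(w).max()) / 31.0
    q = (np.clip(np.round(w / scale), -31, 31) + 32).astype(np.uint8)   # values in [1, 63]
    bits = np.unpackbits(q[:, None], axis=1)[:, 2:]                      # keep the low 6 bits
    packed = np.packbits(bits.reshape(-1))                               # 6 bits per weight on disk
    return zstd.ZstdCompressor(level=22).compress(packed.tobytes()), scale

def decompress_weights(blob: bytes, scale: float, n: int) -> np.ndarray:
    raw = np.frombuffer(zstd.ZstdDecompressor().decompress(blob), dtype=np.uint8)
    bits = np.unpackbits(raw)[: n * 6].reshape(n, 6)
    pad = np.zeros((n, 2), dtype=np.uint8)
    q = np.packbits(np.concatenate([pad, bits], axis=1), axis=1).reshape(n)
    return (q.astype(np.float32) - 32.0) * scale
```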
Results
Reproducibility (3 seeds)
Mean: 1.1908 | Range: 0.0012 | Submitted: seed 2025
Run command
Data
Uses fineweb10B_byte260 — raw UTF-8 bytes tokenized with byte_offset=4 (IDs 4-259 = byte values 0-255). Converted from sp1024 shards via lookup-table decode. No SentencePiece dependency at runtime. BPB = loss / ln(2), no tokenizer correction needed.
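A minimal sketch of the byte_offset=4 mapping and the BPB conversion described above, assuming IDs 0-3 are reserved for special tokens (only the offset itself is stated in the PR):

```python
import math
import numpy as np

BYTE_OFFSET = 4   # assumed: IDs 0-3 reserved for special tokens; IDs 4-259 = bytes 0-255

def encode_bytes(text: str) -> np.ndarray:
    return np.frombuffer(text.encode("utf-8"), dtype=np.uint8).astype(np.uint16) + BYTE_OFFSET

def decode_bytes(ids: np.ndarray) -> str:
    return bytes((ids[ids >= BYTE_OFFSET] - BYTE_OFFSET).astype(np.uint8)).decode("utf-8", errors="replace")

def bpb_from_loss(mean_nats_per_byte: float) -> float:
    # One token == one byte, so no tokenizer correction factor is needed.
    return mean_nats_per_byte / math.log(2)

assert decode_bytes(encode_bytes("hello")) == "hello"
```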