
RecurLoRA v2: Pass Index Embeddings + Low-Rank Adapters on SP8192 Depth Recurrence#1552

Open
Tanush1912 wants to merge 2 commits into openai:main from Tanush1912:submission/recurlora-v2-sp8192

Conversation

@Tanush1912

Summary

  • Pass index embeddings: learned per-pass vectors added to hidden states before repeated layer execution, giving shared weights a "which iteration am I?" signal (3072 params, 6KB)
  • RecurLoRA: rank-2 LoRA corrections on attention projections (Q,K,V,O) for repeated passes (21K params, 42KB)
  • Built on the current frontier stack: SP8192, 3-layer depth recurrence (layers 3-5), parallel residuals L7+, SDClip, MuonEq-R, QK-Gain 5.25, score-first TTT
  • Total novel overhead: 48KB (0.3% of 16MB budget), kept as fp16 passthrough
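The stated overhead can be sanity-checked arithmetically (the 3 passes × d_model=1024 decomposition of the 3072 pass-embedding parameters is an assumption; the 21K LoRA total is taken as stated):

```python
# Sanity-check the stated parameter overhead, with all novel params in fp16 (2 bytes each).
pass_emb_params = 3 * 1024   # assumed: 3 passes x d_model=1024 -> 3072 params (6KB stated)
lora_params = 21_000         # stated total across per-pass rank-2 Q,K,V,O adapters (42KB stated)
bytes_per_param = 2          # fp16 passthrough

overhead_kb = (pass_emb_params + lora_params) * bytes_per_param / 1024
budget_kb = 16 * 1024        # 16MB parameter budget

print(round(overhead_kb), f"{overhead_kb / budget_kb:.1%}")  # prints: 47 0.3%
```

This lands at ~47KB, consistent with the rounded 6KB + 42KB = 48KB figure quoted above and with the 0.3%-of-budget claim.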

Why this direction

The current SOTA (PR #1493, 1.0810 BPB) uses 3-layer recurrence where layers 3-5 execute identically on every pass — no mechanism distinguishes pass 1 from pass 3. This submission adds two complementary per-pass specialization mechanisms:

  • Pass embeddings modify the input to shared layers (inspired by Universal Transformers, Dehghani et al. 2019)
  • LoRA corrections modify the attention weights of shared layers

Together they allow shared layers to condition behavior on recurrence depth at negligible cost, without incurring the quantization error amplification that kills deeper recurrence.
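The two mechanisms compose as follows, sketched here for a single shared projection (e.g. Q). Dimensions, rank placement, and initialization are illustrative assumptions, not the PR's exact configuration:

```python
import numpy as np

def pass_conditioned_projection(h, W, pass_emb, lora_A, lora_B, p):
    """One weight-tied attention projection executed on recurrence pass p.

    h:        (T, d) hidden states entering the shared layer
    W:        (d, d) shared (tied) projection weight
    pass_emb: (n_passes, d) learned pass index embeddings
    lora_A:   (n_passes, r, d) and lora_B: (n_passes, d, r), rank-r per-pass
              corrections; zero-init B makes the correction start at identity.
    """
    h = h + pass_emb[p]                       # "which iteration am I?" signal
    base = h @ W.T                            # shared projection, same on every pass
    delta = (h @ lora_A[p].T) @ lora_B[p].T   # low-rank per-pass correction to W
    return base + delta
```

With `lora_B` zero-initialized, pass 0 of training behaves exactly like the baseline tied layer, and the per-pass specialization is learned on top, which keeps the quantized shared weights untouched.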

Status

Implementation complete and validated:

  • Syntax, class structure, and method signatures verified
  • Pass counting logic confirmed correct across encoder/decoder traversal
  • LoRA shape compatibility with weight banks verified
  • All new parameters are fp16 passthrough, avoiding additional quantization error

Full training runs (3 seeds + ablations) pending compute.

Test plan

  • Full stack without RecurLoRA/pass embeddings — confirm baseline matches ~1.081 BPB
  • Full stack with pass embeddings only — isolate embedding contribution
  • Full stack with RecurLoRA + pass embeddings — measure combined effect (3 seeds)
  • Report mean +/- std across seeds
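The reporting step in the last bullet amounts to (BPB numbers below are placeholders, not results):

```python
import statistics

def report_bpb(bpb_by_seed):
    """Mean +/- sample standard deviation across seeds, as in the test plan."""
    mean = statistics.mean(bpb_by_seed)
    std = statistics.stdev(bpb_by_seed)  # sample std (n-1), conventional for few seeds
    return f"{mean:.4f} +/- {std:.4f}"

print(report_bpb([1.0808, 1.0812, 1.0815]))  # prints: 1.0812 +/- 0.0004
```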

… stack

Two novel per-pass specialization mechanisms for recurrent layers:
- Pass index embeddings (3072 params): learned vectors added to hidden
  states before repeated execution, inspired by Universal Transformers
- Rank-2 LoRA on attention (21K params): per-pass Q,K,V,O corrections

Built on frontier stack: SP8192, 3-layer recurrence (layers 3-5),
parallel residuals L7+, SDClip (int6 matrix/int8 embed), MuonEq-R,
QK-Gain 5.25, score-first TTT. Total overhead: 48KB (0.3% of budget).
leon2k2k2k added a commit to leon2k2k2k/parameter-golf that referenced this pull request Apr 26, 2026
Three teams proposed iteration embeddings (openai#1552, openai#1554, openai#1640) — all open,
no results yet. MLP-only loop confirmed novel across all ~300 PRs scanned.
Residual 1/L partially covered by frozen alpha in openai#1779 baseline.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Refactoring the Hyperparameters class for v2 dropped these names, but
the lr_mul function and the Muon LR group setup still reference them.
Added back as env-overridable defaults to keep the existing code path
working without rewriting the optimizer/scheduler logic.

Verified end-to-end on 1xH100 SXM: 50-step smoke test runs cleanly,
loss decreases (10.14 -> 7.46), all stages complete (training, EMA,
compression, eval roundtrip).
