
Runpod ar selfgen nextsteps #1419

Closed
windy-schmieder wants to merge 6 commits into openai:main from
windy-schmieder:runpod-ar-selfgen-nextsteps

Conversation

@windy-schmieder

No description provided.

windy-schmieder and others added 6 commits March 25, 2026 00:24
Adds Full Attention Residuals (AttnRes) from the Kimi Team paper to the
LeakyReLU² + Legal TTT + Parallel Muon stack. A zero-initialized, learned
softmax depth-attention over all preceding layer outputs replaces the fixed
x0 global residual (sketched after this commit). Also updates
train_gpt_mlx.py with the same changes for local Apple Silicon testing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
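
A minimal sketch of that mechanism, assuming a PyTorch training script; the module name, the history bookkeeping, and whether the zero-init lands on the softmax logits or on an output gate are all assumptions, since only the one-sentence description above appears in the PR:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthAttnResidual(nn.Module):
    """Full Attention Residuals (AttnRes): a learned softmax over ALL
    preceding layer outputs replaces the fixed x0 global residual.
    Names and shapes here are illustrative."""

    def __init__(self, n_layers: int):
        super().__init__()
        # Zero-initialized logits -> uniform softmax at init. Whether the
        # zero-init is on these logits or on a scalar output gate is an
        # assumption; the commit only says "zero-initialized".
        self.logits = nn.Parameter(torch.zeros(n_layers, n_layers + 1))

    def forward(self, layer_idx: int, history: list[torch.Tensor]) -> torch.Tensor:
        # history = [x0, h_1, ..., h_layer_idx]: the embedding plus every
        # preceding block output, each of shape (B, T, C).
        w = F.softmax(self.logits[layer_idx, : len(history)], dim=-1)
        stacked = torch.stack(history)                  # (depth, B, T, C)
        return torch.einsum("d,dbtc->btc", w, stacked)  # depth-weighted mix
```

A block would then mix this into its input, e.g. `x = x + attn_res(i, history)`; the exact integration point is not visible in this PR.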
… lzma-9

Builds on the 2026-03-24 AttnRes stack. Five improvements (sketches of the
new mechanisms follow this commit):
- MTP_NUM_HEADS=1: free training signal, stripped at export (0 artifact bytes)
- GATED_ATTENTION=1: per-head sigmoid gate, ~24KB cost
- VALUE_RESIDUAL=1: value-stream skip connection, ~0 bytes
- TTT Adam optimizer: replaces SGD for better per-chunk adaptation (lr=2e-4)
- lzma preset=9: improved artifact compression, no eval-time cost

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
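
Hedged sketches of three of these, again assuming PyTorch; every class name, placement, and hyperparameter not stated in the commit message is illustrative.

MTP_NUM_HEADS=1: an auxiliary head that predicts one extra future token during training and is deleted before the artifact is saved, hence 0 artifact bytes. The t+2 offset is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTPHead(nn.Module):
    """Single multi-token-prediction head: an auxiliary loss on token t+2.
    Stripped before export, so it adds nothing to the saved artifact."""

    def __init__(self, dim: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(dim, vocab_size, bias=False)

    def forward(self, h: torch.Tensor, targets_tp2: torch.Tensor) -> torch.Tensor:
        logits = self.proj(h)  # h: (B, T, C) final hidden states
        return F.cross_entropy(logits.view(-1, logits.size(-1)),
                               targets_tp2.view(-1))
```

GATED_ATTENTION=1: a per-head sigmoid gate on the attention output. A single dim -> n_heads projection puts the added parameters in the same ballpark as the ~24KB the commit cites; gating before the output projection is an assumed placement:

```python
import torch
import torch.nn as nn

class GatedHeads(nn.Module):
    """Per-head sigmoid gate on attention outputs (illustrative placement)."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.gate = nn.Linear(dim, n_heads, bias=False)  # dim*n_heads params

    def forward(self, x: torch.Tensor, attn_out: torch.Tensor) -> torch.Tensor:
        B, T, C = attn_out.shape
        g = torch.sigmoid(self.gate(x))                     # (B, T, n_heads)
        heads = attn_out.view(B, T, self.n_heads, C // self.n_heads)
        return (heads * g.unsqueeze(-1)).reshape(B, T, C)
```

TTT Adam: the inner-loop optimizer swap is likely no more than the following; the surrounding per-chunk TTT loop is not shown in the PR, and the SGD learning rate here is illustrative:

```python
import torch

def make_ttt_opt(ttt_params, use_adam: bool):
    if use_adam:
        return torch.optim.Adam(ttt_params, lr=2e-4)  # lr per the commit
    return torch.optim.SGD(ttt_params, lr=2e-4)       # prior SOTA default
```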
- Base: LeakyReLU² + Legal TTT + Parallel Muon (1.1194)
- MTP_NUM_HEADS=1: free training signal, stripped at export
- GATED_ATTENTION=1: per-head sigmoid gate, ~24KB artifact cost
- VALUE_RESIDUAL=1: value-stream skip, vr_lambda init [0.0, 1.0] (sketched after this commit)
- lzma preset=9: better compression, no eval cost (usage after this commit)
- TTT_USE_ADAM added but off by default (SGD preserved from SOTA)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
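
Sketches of the two items flagged above, under the same assumptions as before (PyTorch; names illustrative).

VALUE_RESIDUAL=1: a value-stream skip in which each layer's attention values are blended with the first layer's, a near-free two-parameter mix per layer. The [0.0, 1.0] init comes from the commit; which element weights which stream is an assumption:

```python
import torch
import torch.nn as nn

class ValueResidual(nn.Module):
    """Value-stream skip: blend this layer's values v with the first
    layer's v1. Element-to-stream mapping is assumed."""

    def __init__(self):
        super().__init__()
        self.vr_lambda = nn.Parameter(torch.tensor([0.0, 1.0]))

    def forward(self, v: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
        lam = self.vr_lambda
        return lam[0] * v1 + lam[1] * v  # init => pure current-layer values
```

lzma preset=9 is plain stdlib usage: compression gets slower at higher presets, but decompression speed is roughly unchanged, which is why the commit can claim no eval-time cost:

```python
import lzma

def compress_artifact(raw: bytes) -> bytes:
    return lzma.compress(raw, preset=9)  # max compression, same decode speed
```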
