Submission/hybrid RWKV token shift #1112
**Community Review — Submission/hybrid RWKV token shift**

Compliance: NEEDS AUTHOR ACTION

What I found: The CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with `ModuleNotFoundError: No module named 'kernels'`. A few of the common patterns I've seen for this class of error in the 2026-04-11 sweep: …

Recommendation: Could you run …? Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet, because the audit halts at the import step.

Reviewed by @MatoTeziTanka — The Agora.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — `ModuleNotFoundError: No module named 'kernels'`. Classification via …
Non-record submission exploring hybrid transformer architectures that replace most attention layers with a lightweight RWKV-inspired token-shift mixing mechanism. The core idea is that most layers in a transformer only need local context, so full quadratic attention is wasteful for them. Instead, 8 of the 11 layers use a simple token-shift operation that blends adjacent tokens via learned per-dimension interpolation weights, while only 3 layers retain quadratic attention with short (128-token) windows, except the final attention layer, which keeps full context.
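To make the mixing mechanism concrete, here is a minimal sketch of an RWKV-style token-shift layer under my reading of the description; the class and parameter names (`TokenShiftMix`, `mix`) are illustrative, not taken from the submission's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenShiftMix(nn.Module):
    """RWKV-inspired token shift: each position is a learned per-dimension
    blend of the current token and the previous token (local context only).
    Illustrative sketch, not the submission's implementation."""

    def __init__(self, dim: int):
        super().__init__()
        # one learnable interpolation weight per channel
        self.mix = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); shift right by one token, zero-pad position 0
        shifted = F.pad(x, (0, 0, 1, 0))[:, :-1, :]
        w = torch.sigmoid(self.mix)          # per-dimension weight in (0, 1)
        return w * x + (1.0 - w) * shifted   # blend token t with token t-1
```

In the hybrid stack described above, 8 of the 11 layers would use something like this in place of attention, keeping the per-layer cost linear in sequence length.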
The architecture achieves a 3-seed mean val_bpb of 1.2252 with 17.0M parameters, int6 quantized and zlib compressed to ~15.86 MB. While this does not beat the current SOTA, I believe the token-shift approach is promising for its efficiency — particularly for inference, where the reduced attention overhead could significantly speed up decoding.
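For readers curious how a checkpoint of that form would be produced, a plausible int6 + zlib pipeline looks like the sketch below. The function name, per-tensor symmetric scaling, and one-byte-per-value storage are my assumptions; the submission's actual packing scheme is in the README, not shown here.

```python
import zlib
import numpy as np

def quantize_int6_and_compress(weights: np.ndarray, level: int = 9) -> bytes:
    """Hypothetical int6 + zlib pipeline (not the submission's packer).

    Symmetric per-tensor quantization to 6-bit integers in [-31, 31],
    stored one value per byte for simplicity; a real packer would
    bit-pack four 6-bit values into three bytes before compressing."""
    scale = float(np.abs(weights).max()) / 31.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: avoid division by zero
    q = np.clip(np.round(weights / scale), -31, 31).astype(np.int8)
    return zlib.compress(q.tobytes(), level=level)
```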
Beyond the hybrid architecture, the submission stacks several techniques from the leaderboard: SmearGate, bigram hash embeddings, value embeddings, XSA (cross-head suppression), partial RoPE (16/64 dims), LeakyReLU squared activation, Muon optimizer, EMA with late QAT, and logit softcapping. Full details and ablation notes are in the README.
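Of the stacked techniques, logit softcapping is the easiest to show in isolation: the standard formulation squashes logits smoothly into (-cap, cap) via tanh. The cap value below is illustrative; the submission's choice would be in the README.

```python
import torch

def softcap_logits(logits: torch.Tensor, cap: float = 30.0) -> torch.Tensor:
    # Smoothly bound logits to (-cap, cap): near-identity for small values,
    # saturating for large ones, which keeps extreme logits from dominating.
    return cap * torch.tanh(logits / cap)
```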