Non-record: Triton KV-cache backend for autoregressive eval #1153
LucasErcolano wants to merge 3 commits into `openai:main`
Conversation
Community Review — Non-record: Triton KV-cache backend for autoregressive eval

Compliance: NEEDS AUTHOR ACTION

What I found: The CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with `ModuleNotFoundError: No module named 'triton_kv_ops'`. A few of the common patterns I've seen for this class of error in the 2026-04-11 sweep: …

Recommendation: Could you run …? Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet because the audit halts at the import step.

Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — ModuleNotFoundError: No module named 'triton_kv_ops'. Classification via …
Retraction — this IMPORT_FAIL was a bug in my smoke runner

Sorry @LucasErcolano, this one's on me. I re-audited the IMPORT_FAIL I posted above and it was a false positive — the fault is in how my CPU smoke runner set up …, not anything in your submission.

What happened: The runner imported your … Verified at head … On the real eval image (Python 3.10, …), your PR is not broken by this error. I'm retracting the IMPORT_FAIL classification. I'll re-queue the full compliance audit (BPB check, n-gram / TTT / SLOT flags, etc.) on the current head and post findings separately.

Again — sorry for the noise. These community reviews only work if I actually read what I'm reviewing, and I didn't in this case.
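The runner-side failure mode retracted above (an IMPORT_FAIL reported because the smoke harness, not the submission, mishandled the import) can be sketched as a minimal import check. The function name, directory layout, and return shape below are illustrative only, not the actual runner code:

```python
import importlib
import sys
import time
from pathlib import Path


def import_smoke_check(submission_dir, module_name):
    """Attempt to import `module_name` with `submission_dir` on sys.path.

    Returns (ok, seconds, error). A runner that skips the sys.path
    insertion below can report IMPORT_FAIL for a perfectly good
    submission -- the kind of false positive being retracted above.
    """
    path = str(Path(submission_dir).resolve())
    added = path not in sys.path
    if added:
        sys.path.insert(0, path)
    start = time.perf_counter()
    try:
        importlib.import_module(module_name)
        return True, time.perf_counter() - start, None
    except ModuleNotFoundError as exc:
        return False, time.perf_counter() - start, str(exc)
    finally:
        if added:
            sys.path.remove(path)
```

A genuinely missing module still classifies as a failure, but only after the submission directory itself has been given a fair chance on the path.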
Community Review — Non-record: Triton KV-cache backend for autoregressive eval

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA …): Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.22s, dim=512, layers=9, vocab=1024, code=107777 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based …
Summary
Stacked on top of #1149.
This PR adds Triton-backed KV-cache evaluation paths for the existing non-record submission:
- `int8_triton`: fused grouped-int8 score/apply kernels
- `qjl_triton`: fused QJL sign-score kernel plus grouped value apply kernel

This is a systems PR only. Training/ML behavior stays intentionally close to #1149.
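For reviewers unfamiliar with the two schemes, the numpy sketch below shows roughly what a grouped-int8 score and a QJL sign-score compute. The real backends are fused Triton kernels; every helper name here is made up for illustration, and the QJL estimator is only the textbook sign-sketch form, not necessarily the exact kernel math:

```python
import numpy as np


def int8_group_quantize(k, group=32):
    """Symmetric int8 quantization of keys, one scale per `group`
    channels along head_dim. k: (seq, head_dim) float32."""
    seq, d = k.shape
    g = k.reshape(seq, d // group, group)
    scale = np.abs(g).max(axis=-1, keepdims=True) / 127.0 + 1e-12
    codes = np.clip(np.round(g / scale), -127, 127).astype(np.int8)
    return codes, scale.astype(np.float32)


def int8_score(q, codes, scale):
    """Attention scores q @ K^T against dequantized grouped-int8 keys."""
    k_hat = (codes.astype(np.float32) * scale).reshape(codes.shape[0], -1)
    return q @ k_hat.T


def qjl_sign_score(q, k, proj):
    """Sign-sketch estimate of q @ K^T: the cache keeps only
    sign(K @ proj) (one bit per projection) plus key norms; the
    sqrt(pi/2)/m factor makes this unbiased for Gaussian `proj`."""
    m = proj.shape[1]
    k_signs = np.sign(k @ proj)           # (seq, m), the 1-bit cache
    k_norm = np.linalg.norm(k, axis=-1)   # (seq,), stored alongside
    return (q @ proj) @ k_signs.T * k_norm * np.sqrt(np.pi / 2) / m


rng = np.random.default_rng(0)
k = rng.standard_normal((1024, 64)).astype(np.float32)
q = rng.standard_normal((1, 64)).astype(np.float32)
proj = rng.standard_normal((64, 256)).astype(np.float32)

exact = q @ k.T
codes, scale = int8_group_quantize(k)
int8_approx = int8_score(q, codes, scale)
qjl_approx = qjl_sign_score(q, k, proj)
```

The int8 path reproduces exact scores to within quantization error, while the sign-sketch scores are noisier but need only one bit per projected dimension in the cache.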
Local Validation
RTX 3090 synthetic benchmark (`seq_len=1024`, `head_dim=64`):

- `int8_backend` -> `int8_triton`: 2.68x faster for score+apply
- `qjl` -> `qjl_triton`: 2.60x faster for score+apply

Benchmark-side peak allocator stats from
`benchmark_kv_backends.py`:

- `int8_backend`: `peak_alloc=13.71 MiB`, `peak_over_baseline=3.00 MiB`
- `int8_triton`: `peak_alloc=10.77 MiB`, `peak_over_baseline=0.03 MiB`
- `qjl`: `peak_alloc=13.76 MiB`, `peak_over_baseline=3.00 MiB`
- `qjl_triton`: `peak_alloc=10.83 MiB`, `peak_over_baseline=0.03 MiB`

End-to-end autoregressive smoke eval over 512 validation tokens:
- `qjl`: 63.53 tok/s
- `qjl_triton`: 70.83 tok/s

`run_kv_backend_selftests(...)` passes with the Triton path enabled.

Review Notes
This branch is stacked on #1149. The intended review focus is the commit sequence on top of that PR:
- `b58d15c` Add Triton KV-cache backends and benchmarks
- `2da8d23` Report CUDA peak memory in Triton benchmark
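A quick cross-check of the headline numbers from the Local Validation section (figures copied verbatim from above; note the 2.68x/2.60x entries are kernel-level score+apply speedups, so the end-to-end gain is expectedly smaller):

```python
# End-to-end smoke eval over 512 validation tokens (tok/s, from above).
qjl_tok_s, qjl_triton_tok_s = 63.53, 70.83
e2e_speedup = qjl_triton_tok_s / qjl_tok_s  # roughly 1.11x end to end

# Peak allocation over baseline (MiB, from benchmark_kv_backends.py).
before_mib, after_mib = 3.00, 0.03          # non-Triton vs Triton paths
overhead_cut = 1.0 - after_mib / before_mib  # Triton shaves ~99% of it

print(f"end-to-end: {e2e_speedup:.2f}x, overhead cut: {overhead_cut:.0%}")
```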