GPTQ Int6 + SGD Test-Time Training — A800 1.1190 bpb#610
GPTQ Int6 + SGD Test-Time Training — A800 1.1190 bpb#610ChaosCodes wants to merge 1 commit intoopenai:mainfrom
Conversation
11-layer 512d GPT with PR#414 10-technique stack + LeakyReLU² activation, post-training GPTQ int6 quantization, and SGD test-time training with cosine LR decay. Artifact size: 15.75MB (under 16MB limit). Techniques: XSA4, EMA, U-Net skip, SmearGate, BigramHash, PartialRoPE, LNScale, ValueEmbed, LateQAT, SWA, LeakyReLU², GPTQ int6, SGD TTT.
Community Review — GPTQ Int6 + SGD Test-Time Training — A800 1.1190 bpbBPB: 1.1190 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache What I found in the code (head SHA Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=67718 B, SMOKE_TEST_PASS Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline. Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually. Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=67718 B, SMOKE_TEST_PASS. Classification via deterministic AST-based |
Summary
Key Results
Techniques
Architecture (PR#414 stack): XSA4, EMA, U-Net skip, SmearGate, BigramHash, PartialRoPE, LNScale, ValueEmbed, LateQAT, SWA
Novel contributions:
Compression
zstd level-21 with long-distance matching (LDM) for model artifact compression.
Files
train_gpt.py— Full training + GPTQ + TTT evaluation pipelineeval_gptq.py— Standalone GPTQ evaluation scripteval_ttt.py— Standalone TTT evaluation scriptsubmission.json— Structured results metadatatrain.log— Complete training logREADME.md— Detailed writeup with technique descriptions and ablationsSee
records/track_10min_16mb/2026-03-24_GPTQ_TTT/README.mdfor full details.