
11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)#586

Open
EaCognitive wants to merge 2 commits into openai:main from EaCognitive:submission/hadamard-ve128-quip-lite

Conversation


@EaCognitive EaCognitive commented Mar 23, 2026

11L + Hadamard Rotation + VE128 + cuDNN SDPA

val_bpb: 1.1365 (sliding window stride=64, 3-seed mean, std 0.0005) | ~15.6 MB | 8xH100 SXM, 600s

3-Seed Results

| Seed | Steps | Pre-quant BPB | Sliding BPB | Artifact (bytes) | Compression |
|------|-------|---------------|-------------|------------------|-------------|
| 1337 | 8098  | 1.1512        | 1.1364      | 15,618,718       | 1.75x       |
| 42   | 8102  | 1.1513        | 1.1361      | 15,629,540       | 1.75x       |
| 2024 | 7960  | 1.1521        | 1.1370      | 15,600,361       | 1.76x       |
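The sliding-window figures above use stride-64 re-scoring. A minimal sketch of that evaluation pattern (function and variable names here are illustrative, not the repo's actual eval code; the toy uniform model exists only to make the sketch runnable):

```python
import math

def sliding_window_nll(logprob_fn, tokens, window=1024, stride=64):
    """Sum NLL over all tokens. The first window is scored in full; after
    that the window advances by `stride` and only the newest `stride`
    tokens are scored, with the rest of the window serving as context."""
    total, start = 0.0, 0
    while start < len(tokens):
        end = min(start + (window if start == 0 else stride), len(tokens))
        ctx_begin = max(0, end - window)  # left edge of the context window
        for i in range(start, end):
            total += -logprob_fn(tokens[ctx_begin:i], tokens[i])
        start = end
    return total

def bpb(nll_nats, n_bytes):
    """Bits-per-byte from a total NLL measured in nats."""
    return nll_nats / (math.log(2) * n_bytes)

# Sanity check with a uniform model over a 1024-token vocab:
V = 1024
uniform = lambda context, token: -math.log(V)  # every token equally likely
tokens = list(range(4096))
score = bpb(sliding_window_nll(uniform, tokens), n_bytes=8192)
print(round(score, 6))  # → 5.0 (4096 tokens * 10 bits / 8192 bytes)
```

The stride trades eval cost against context quality: stride=64 gives every scored token at least `window - stride` tokens of left context after the first window.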

Technique: Data-Free Hadamard Rotation for Int6 Quantization

A Walsh-Hadamard rotation is applied to each weight matrix before int6 per-row quantization. The orthogonal rotation spreads outlier values uniformly across coordinates, improving zstd compression from 1.70x to 1.76x and reducing the quantization gap from 0.0093 to 0.0084 BPB.

This technique is data-free: no calibration samples, no training data access at eval time. The rotation matrix is deterministic from the weight dimension.
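A minimal numpy sketch of the idea as described (Sylvester-construction Hadamard, symmetric per-row int6). This is a reconstruction under stated assumptions, not the submission's code:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Deterministic orthonormal Hadamard matrix via the Sylvester
    construction; depends only on the dimension (n must be a power of 2)."""
    assert n & (n - 1) == 0, "n must be a power of 2"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # H @ H.T == I

def quantize_int6_per_row(W: np.ndarray):
    """Symmetric per-row int6 quantization to the range [-31, 31]."""
    scale = np.abs(W).max(axis=1, keepdims=True) / 31.0
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round(W / scale), -31, 31).astype(np.int8)
    return q, scale

def roundtrip(W: np.ndarray, rotate: bool) -> np.ndarray:
    """Quantize/dequantize, optionally rotating first and un-rotating after."""
    H = hadamard(W.shape[1])
    X = W @ H if rotate else W          # rotation spreads outliers across columns
    q, s = quantize_int6_per_row(X)
    X_hat = q.astype(np.float64) * s
    return X_hat @ H.T if rotate else X_hat

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 512))
W[0, 0] = 50.0                          # inject an outlier into one row
err_plain = np.abs(roundtrip(W, False) - W).mean()
err_rot = np.abs(roundtrip(W, True) - W).mean()
print(f"plain={err_plain:.4f} rotated={err_rot:.4f}")  # rotation shrinks the error
```

An outlier inflates its row's per-row scale and wastes most of the 6-bit range; rotating first flattens the row's value distribution, so the scale (and hence the rounding error) drops.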

No other open or merged PR uses rotation-based quantization.

Compression Enables Architecture

The 0.06x compression improvement recovers 530KB of artifact headroom within the 16MB budget, directly enabling Shared Value Embeddings (VE128 on layers 9-10), which previously overflowed the 44KB of remaining headroom.
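A back-of-envelope check of that headroom arithmetic. The 16,000,000-byte decimal budget is an assumption on my part (it is consistent with the ~15.6MB artifacts versus ~400KB final headroom), and the published ratios are rounded to two decimals, so the result lands near rather than exactly on the claimed figure:

```python
BUDGET = 16_000_000                        # assumed: 16 MB cap in decimal bytes
baseline_headroom = 44_000                 # 44 KB of headroom before the rotation
raw = (BUDGET - baseline_headroom) * 1.70  # implied uncompressed payload at 1.70x
artifact_at_176 = raw / 1.76               # same payload at the improved 1.76x ratio
recovered = (BUDGET - artifact_at_176) - baseline_headroom
print(round(recovered / 1000), "KB recovered")  # ≈ 544 KB, near the ~530 KB claimed
```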

Ablation

| Config                         | Sliding BPB | Compression | Headroom | Quant Gap |
|--------------------------------|-------------|-------------|----------|-----------|
| Baseline (no Hadamard, no VE)  | 1.1372      | 1.70x       | 44KB     | 0.0093    |
| + Hadamard rotation            | 1.1377      | 1.78x       | 712KB    | 0.0091    |
| + VE128 (enabled by headroom)  | 1.1365      | 1.76x       | 400KB    | 0.0084    |

Findings

  • Hadamard rotation and GPTQ are substitutes at int6 precision. Full GPTQ (actorder + Cholesky) provides zero additional improvement when Hadamard rotation is present. Tested three times with identical results.
  • CPU parameter probe guided hyperparameter selection across 9.5M configurations, reducing GPU compute by ~84%.
  • No TTT. No training data access at eval time.

Architecture

11 layers, 512-dim, 8 heads (4 KV heads, GQA), MLP 3x relu-squared, XSA on last 4 layers, Partial RoPE (16/64), LN Scale, U-Net skip connections, SmearGate, BigramHash(2048), EMA 0.997, cuDNN SDPA. Muon lr=0.025 + AdamW lr=0.035. Warmdown 3500 steps (cosine).
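For reference, the hyperparameters above gathered into one illustrative config (field names are mine, not the repo's identifiers; values are transcribed from the description):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubmissionConfig:
    # Illustrative field names; values from the PR description.
    n_layer: int = 11
    n_embd: int = 512
    n_head: int = 8
    n_kv_head: int = 4            # grouped-query attention
    mlp_expansion: int = 3        # relu-squared MLP
    rope_dims: int = 16           # partial RoPE over 16 of the 64 head dims
    ve_dim: int = 128             # shared value embeddings on layers 9-10
    bigram_hash_size: int = 2048
    ema_decay: float = 0.997
    muon_lr: float = 0.025
    adamw_lr: float = 0.035
    warmdown_steps: int = 3500    # cosine warmdown

cfg = SubmissionConfig()
print(cfg.n_embd // cfg.n_head)   # → 64, the head dim implied by "Partial RoPE (16/64)"
```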

Erick Aleman | EA Cognitive | www.eacognitive.com | github.com/eacognitive

Data-free Walsh-Hadamard rotation for int6 quantization. Improves
zstd compression from 1.70x to 1.76x, recovering 530KB of artifact
headroom that enables Shared Value Embeddings (VE128).

No calibration data. No training data access at eval time. No TTT.
3-seed mean: 1.1365 +/- 0.0005 BPB. All artifacts under 16MB.

Erick Aleman | EA Cognitive | www.eacognitive.com
@EaCognitive EaCognitive force-pushed the submission/hadamard-ve128-quip-lite branch from 82e3375 to 0915466 Compare March 24, 2026 20:03
@EaCognitive EaCognitive changed the title Record: 11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365) 11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean) Mar 24, 2026
@EaCognitive
Author

This submission validates our current approach. Ongoing research has identified several highly promising extensions from recent literature that we're ready to test. GPU grant has been submitted. We welcome any feedback. Thank you.

@MatoTeziTanka

Community Review — 11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)

BPB: 1.1365 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 224468130c89, file records/track_10min_16mb/2026-03-24_11L_Hadamard_VE128_cuDNN_1.1365/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=52493 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka | The Agora. Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
