diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/README.md b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/README.md
new file mode 100644
index 0000000000..3ec9241395
--- /dev/null
+++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/README.md
@@ -0,0 +1,134 @@
+# Non-record: SP8192 + SOTA recipe on 1xA100 — 1.0704 BPB (TTT) / 1.0727 (sliding)
+
+**Author:** Huanyi Xie (`xiehuanyi`)
+**Date:** 2026-04-11
+**Track:** `non_record_16mb`
+**Result:** **val_bpb = 1.07034733** (int6 GPTQ + Brotli + sliding-window eval s64 + Legal Score-First TTT)
+
+## TL;DR
+
+This runs the **exact PR #1493 SOTA recipe** (SP8192 + 3-layer recurrence + parallel residuals + QK-Gain 5.25 + legal score-first TTT + MuonEq-R + SDClip GPTQ + Brotli + byte shuffle) on **1 × A100 80GB for 4 hours** instead of the required 8 × H100 for 10 minutes. The compute budget is roughly equivalent (~80 H100-minutes), but because it wasn't actually run on the required hardware, this is a non-record submission.
+
+**Headline result:**
+- **TTT BPB: 1.07035** (beats the upstream main-leaderboard TTT SOTA of 1.0810 by 0.01065)
+- **Sliding BPB: 1.07266** (beats the upstream main-leaderboard sliding SOTA of 1.0827 by 0.01004)
+- **Total submission size: 16,019,227 bytes** (under 16 MiB = 16,777,216)
+
+## Why non-record
+
+This submission does **not** qualify for `track_10min_16mb` because:
+1. It ran on **1×A100 for 4h (14,400s)** instead of **8×H100 for 10 min**
+2. A100 doesn't support FlashAttention-3 (Hopper-only); the script falls back to PyTorch SDP with the flash backend
+3. It was never verified on actual 8×H100 hardware
+
+Rough compute comparison (dense BF16 peak):
+- Budget: 8 × H100 (~990 TFLOPS each) × 10 min = 80 H100-minutes
+- This run: 1 × A100 (~312 TFLOPS) × 240 min = 240 A100-minutes
+- At the ~3.17× H100/A100 BF16 throughput ratio, the 80 H100-minute budget corresponds to ~254 A100-minutes, so this run slightly undershoots the budget in raw FLOPs, and gets none of the FA3 speedup.
+
+So this submission is roughly **compute-equivalent** to the main-leaderboard budget, just not on the required hardware.
+
+## What's in the recipe
+
+The training script is a minor adaptation of the PR #1493 script (decompressed from its LZMA+base85 wrapper) with three changes:
+
+1. **FA3 → FA2/SDP fallback**: On A100, FlashAttention-3 is unavailable, so the attention wrapper falls through to PyTorch's `scaled_dot_product_attention` with the flash backend. A manual GQA head-repeat is added for the SDP path since PyTorch SDP doesn't natively support `num_heads != num_kv_heads`. See the sketch after this list.
+2. **Python 3.9 compatibility**: Removed `zip(strict=True)` and nested f-string quotes.
+3. **`GRAD_ACCUM_STEPS` env override**: Added so the script can be run with arbitrary grad accumulation on single-GPU setups (the default is still `8 // world_size`).
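+
+For reference, the fallback in change 1 is the `flash_attn_3_func` shipped in `train_gpt.py`, reformatted here with comments (the script itself is minified to keep code bytes down). The wrapper keeps the FA3-style name and `(B, T, H, D)` layout so the rest of the SOTA script is untouched:
+
+```python
+import torch.nn.functional as F
+
+try:
+    from flash_attn import flash_attn_func as _fa2  # FA2, if installed
+    _ATTN_BACKEND = 'fa2'
+except ImportError:
+    _fa2 = None
+    _ATTN_BACKEND = 'sdp'  # PyTorch SDP with the flash backend enabled
+
+def flash_attn_3_func(q, k, v, causal=True):
+    # q: (B, T, H, D); k, v: (B, T, H_kv, D) -- flash-attn layout
+    if _ATTN_BACKEND == 'fa2':
+        return _fa2(q, k, v, causal=causal)
+    B, T, H, D = q.shape
+    Hkv = k.shape[2]
+    if H != Hkv:
+        # SDP is called here without GQA support, so repeat each KV head
+        # H // H_kv times; query head h then reads kv head h // rep.
+        rep = H // Hkv
+        k = k.unsqueeze(3).expand(B, T, Hkv, rep, D).reshape(B, T, H, D)
+        v = v.unsqueeze(3).expand(B, T, Hkv, rep, D).reshape(B, T, H, D)
+    # SDP expects (B, H, T, D), so transpose in and out.
+    q2, k2, v2 = q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
+    y = F.scaled_dot_product_attention(q2, k2, v2, is_causal=causal)
+    return y.transpose(1, 2)
+```
+
+The `expand(...).reshape(...)` pair materializes the repeated KV heads, trading one extra copy of K/V per call for SDP compatibility.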
+
+Everything else is exactly as in the SOTA submission:
+- **SP8192** tokenizer (retokenized FineWeb 10B with an 8192-vocab SentencePiece BPE model borrowed from the 74M_Ternary record)
+- **11L × 512d × 8H / 4KV GQA**, MLP 4×, LeakyReLU(0.5)²
+- **Depth Recurrence**: loops physical layers 3-5 twice, creating 17 virtual layers from 11 physical, activated at `frac=0.35` of training
+- **Parallel Residuals** from layer 7+ (last 4 layers only, GPT-J style)
+- **QK-Gain init = 5.25** (per-head learnable query scaling, non-default SOTA setting)
+- **Skip Gates** (sigmoid-gated U-Net skip connections)
+- **MuonEq-R**: row-normalized Muon, Newton-Schulz 5 steps (plus AdamW for embeddings/scalars and Adam for the head)
+- **Partial RoPE (16/64)** + LN Scale
+- **EMA decay 0.9965** with warmdown fraction 0.72
+- **MUON_WD = 0.095, ADAM_WD = 0.02, EMBED_WD = 0.085, MATRIX_LR = 0.022**
+- **GPTQ with SDClip**: int6 attention/MLP (k=12.85), int8 embeddings (k=20.0), block size 128
+- **Brotli-11 + byte shuffle** compression
+- **Legal Score-First TTT**: SGD lr=0.005 momentum=0.9, 3 epochs per 32K-token chunk, cosine LR decay, score-before-update ordering
+
+## Numbers (seed 1337)
+
+| Metric | Value |
+|---|---|
+| Pre-quantization post-EMA BF16 | 1.07610 |
+| Int6 quantized (no sliding) | 1.08950 |
+| **Int6 + Sliding Window s64** | **1.07266** |
+| **Int6 + Sliding + Legal TTT** | **1.07035** ← reported |
+| Steps trained | 6371 / 20000 (wallclock capped) |
+| Step avg | ~2260 ms (on 1×A100, SDP backend) |
+| Peak GPU memory | 41.8 GiB |
+| Model params | 35,944,536 |
+| Artifact bytes (int6 + brotli) | 15,970,123 |
+| Code bytes (uncompressed) | 49,104 |
+| **Total submission bytes** | **16,019,227** |
+
+## Comparison vs. upstream records
+
+| Submission | Sliding BPB | TTT BPB |
+|---|---|---|
+| **This (exp62, 1xA100 4h)** | **1.07266** | **1.07035** |
+| PR #1493 SOTA (8xH100 10min) | 1.0827 | 1.0810 |
+| PR #1477 (SP8192 + ParResid + TTT) | ~1.082 | 1.0822 |
+| PR #1413 (SP8192 + QK5 + TTT) | ~1.084 | 1.0828 |
+| PR #1412 (SP8192 + ParResid + SDClip) | ~1.086 | 1.0835 |
+| PR #1394 (SP8192 + GPTQ Emb + SDClip) | ~1.088 | 1.0856 |
+
+Delta vs. PR #1493 SOTA: **-0.01004 sliding, -0.01065 TTT**.
+
+## Comparison with exp60 / exp61 (same training config, different QK-Gain)
+
+Three runs used identical seeds and hyperparameters except for `QK_GAIN_INIT`:
+
+| Run | QK_GAIN | Int6 | Sliding | TTT |
+|---|---|---|---|---|
+| exp60 | 5.0 (SOTA default) | 1.09031 | 1.07345 | 1.07137 |
+| exp61 | 5.0 + TTT flag at train | 1.09031 | 1.07345 | 1.07137 |
+| **exp62** | **5.25** | **1.08950** | **1.07266** | **1.07035** |
+
+`QK_GAIN_INIT=5.25` (the SOTA record's exact, non-default value) consistently helps all three quantization/eval phases, matching the SOTA record's reported monotonic improvement from 4.0 to 5.25.
+
+## Reproduction
+
+```bash
+pip install brotli sentencepiece
+# A100: flash_attn (FA2) optional, falls back to SDP if not installed
+# pip install flash-attn --no-build-isolation
+
+# 1. Download docs and retokenize with SP8192 (one-time, ~2h on CPU)
+python data/download_hf_docs_and_tokenize.py \
+  --repo-id willdepueoai/parameter-golf \
+  --remote-root datasets \
+  --output-root data \
+  --tokenizer-config data/tokenizer_specs_sp8192.json \
+  --skip-byte \
+  --reuse-sp-model 8192=
+
+# 2. Train (4h on 1x A100 80GB)
+DATA_DIR=./data/ \
+SEED=1337 \
+VOCAB_SIZE=8192 \
+MAX_WALLCLOCK_SECONDS=14400 \
+QK_GAIN_INIT=5.25 \
+TTT_ENABLED=1 \
+torchrun --standalone --nproc_per_node=1 train_gpt.py
+```
+
+## Caveats
+
+- **Single seed (1337) only.** A 3-seed mean (e.g. 42, 314, 999) has not been run. The main-leaderboard SOTA reports a 3-seed mean/std; this submission is single-seed for time reasons.
+- **Non-record hardware.** Not verified on 8×H100; ran for 4h on 1×A100.
+- Two runs (exp60 and this run, exp62) crashed with SIGSEGV at the end of their in-script eval pipelines (a torch.compile recompilation issue when creating a fresh GPT instance for eval after training). The saved quantized artifacts were then evaluated successfully via a standalone `eval_only.py` script; the reported numbers come from that standalone eval.
+- The `grad_accum=2` variant (exp63/64) OOM'd: the SOTA model with MLP 4× + depth recurrence has a larger per-micro-batch memory footprint than the simpler v2_full_stack model from earlier rounds.
+
+## Files
+
+- `README.md` (this file)
+- `submission.json`
+- `train_gpt.py` — A100-adapted SOTA script (FA3→SDP fallback, Python 3.9 compat, GRAD_ACCUM_STEPS env override)
+- `final_model.int6.ptz` — 15.97 MB int6+brotli quantized artifact
+- `train_seed1337.log` — full training log
+- `eval_seed1337.log` — standalone eval log (sliding + TTT numbers)
diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/eval_seed1337.log b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/eval_seed1337.log
new file mode 100644
index 0000000000..be3493e4b3
--- /dev/null
+++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/eval_seed1337.log
@@ -0,0 +1,5 @@
+quantized val_loss:2.81379763 val_bpb:1.08949971 eval_time:101404ms
+quantized_sliding_window val_loss:2.77029639 val_bpb:1.07265608 eval_time:1580310ms
+ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3
+quantized_ttt val_loss:2.76433372 val_bpb:1.07034733 eval_time:2775717ms
+EXIT_CODE=0
diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/final_model.int6.ptz b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/final_model.int6.ptz
new file mode 100644
index 0000000000..bd96cbcbf3
Binary files /dev/null and b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/final_model.int6.ptz differ
diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/requirements.txt b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/requirements.txt
new file mode 100644
index 0000000000..898e905c4c
--- /dev/null
+++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/requirements.txt
@@ -0,0 +1,4 @@
+sentencepiece
+brotli
+# A100: FA2 optional, falls back to PyTorch SDP (flash backend) if unavailable.
+# pip install flash-attn --no-build-isolation diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/submission.json b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/submission.json new file mode 100644 index 0000000000..6208f61d43 --- /dev/null +++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/submission.json @@ -0,0 +1,80 @@ +{ + "author": "Huanyi Xie", + "github_id": "xiehuanyi", + "name": "SP8192 + 3L Recurrence + Parallel Residuals + QK-Gain 5.25 + Legal TTT \u2014 1.07035 BPB (1xA100, non-record)", + "blurb": "Reproduces the PR #1493 SOTA 1.0810 recipe (SP8192 + 3-layer recurrence + parallel residuals layer 7+ + QK-Gain 5.25 + Legal Score-First TTT + MuonEq-R + SDClip GPTQ + Brotli) on 1xA100 for 4h instead of 8xH100 for 10 min. Beats the upstream main-leaderboard SOTA by 0.0107 BPB on TTT and 0.0101 BPB on sliding window. Non-record because it ran on 1xA100 (not required 8xH100), but compute-equivalent at ~80 H100-minute budget.", + "date": "2026-04-11", + "track": "non_record_16mb", + "val_loss": 2.76433372, + "val_bpb": 1.07034733, + "seeds": [1337], + "seed_results": { + "1337": { + "val_loss_pre_quant": 2.77918548, + "val_bpb_pre_quant": 1.07609792, + "val_loss_quantized": 2.81379763, + "val_bpb_quantized": 1.08949971, + "val_loss_sliding": 2.77029639, + "val_bpb_sliding": 1.07265608, + "val_loss_ttt": 2.76433372, + "val_bpb_ttt": 1.07034733, + "artifact_bytes": 15970123, + "code_bytes": 49104, + "total_submission_bytes": 16019227, + "steps": 6371, + "step_avg_ms": 2259.7 + } + }, + "artifact_bytes_max": 15970123, + "bytes_total": 16019227, + "train_steps_mean": 6371, + "hardware": "1x NVIDIA A100 80GB SXM4 (IBEX cluster, KAUST)", + "pytorch_version": "2.8.0+cu128", + "cuda_version": "12.8", + "python_version": "3.9.18", + "attn_backend": "PyTorch SDP (flash backend; FA3 unavailable on A100)", + "max_wallclock_seconds": 14400, + "train_seq_len": 2048, + "eval_seq_len": 2048, + "train_batch_tokens": 786432, + "num_layers": 11, + "model_dim": 512, + "mlp_mult": 4.0, + "vocab_size": 8192, + "num_heads": 8, + "num_kv_heads": 4, + "qk_gain_init": 5.25, + "num_loops": 2, + "loop_start": 3, + "loop_end": 5, + "enable_looping_at": 0.35, + "parallel_residual_start": 7, + "skip_gates_enabled": true, + "muon_row_normalize": true, + "muon_wd": 0.095, + "adam_wd": 0.02, + "embed_wd": 0.085, + "matrix_lr": 0.022, + "ema_decay": 0.9965, + "warmdown_frac": 0.72, + "ln_scale": true, + "rope_dims": 16, + "compressor": "brotli", + "matrix_bits": 6, + "embed_bits": 8, + "matrix_clip_sigmas": 12.85, + "embed_clip_sigmas": 20.0, + "ttt_enabled": true, + "ttt_lr": 0.005, + "ttt_epochs": 3, + "ttt_momentum": 0.9, + "ttt_chunk_tokens": 32768, + "model_params": 35944536, + "upstream_sota_ref": { + "pr": 1493, + "val_bpb_sliding": 1.0827, + "val_bpb_ttt": 1.0810, + "delta_vs_us_sliding": -0.01004, + "delta_vs_us_ttt": -0.01065 + } +} diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_gpt.py b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_gpt.py new file mode 100644 index 0000000000..bafbccf9e6 --- /dev/null +++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_gpt.py @@ -0,0 +1,486 @@ +import collections,copy,glob,io,lzma,math,os +from pathlib import Path +import random,re,subprocess,sys,time,uuid,numpy as np,sentencepiece as spm,torch,torch.distributed as dist,torch.nn.functional as F +from 
torch.nn.parallel import DistributedDataParallel as DDP +from torch import Tensor,nn +try: + from flash_attn import flash_attn_func as _fa2 + _ATTN_BACKEND='fa2' +except ImportError: + _fa2=None + _ATTN_BACKEND='sdp' +def flash_attn_3_func(q,k,v,causal=True): + if _ATTN_BACKEND=='fa2': + return _fa2(q,k,v,causal=causal) + B,T,H,D=q.shape;Hkv=k.shape[2] + if H!=Hkv: + rep=H//Hkv + k=k.unsqueeze(3).expand(B,T,Hkv,rep,D).reshape(B,T,H,D) + v=v.unsqueeze(3).expand(B,T,Hkv,rep,D).reshape(B,T,H,D) + q2,k2,v2=q.transpose(1,2),k.transpose(1,2),v.transpose(1,2) + y=F.scaled_dot_product_attention(q2,k2,v2,is_causal=causal) + return y.transpose(1,2) +class Hyperparameters:data_dir=os.environ.get('DATA_DIR','./data/');seed=int(os.environ.get('SEED',1337));run_id=os.environ.get('RUN_ID',str(uuid.uuid4()));iterations=int(os.environ.get('ITERATIONS',20000));warmdown_frac=float(os.environ.get('WARMDOWN_FRAC',.72));warmup_steps=int(os.environ.get('WARMUP_STEPS',20));train_batch_tokens=int(os.environ.get('TRAIN_BATCH_TOKENS',786432));train_seq_len=int(os.environ.get('TRAIN_SEQ_LEN',2048));train_log_every=int(os.environ.get('TRAIN_LOG_EVERY',500));max_wallclock_seconds=float(os.environ.get('MAX_WALLCLOCK_SECONDS',6e2));val_batch_tokens=int(os.environ.get('VAL_BATCH_TOKENS',524288));eval_seq_len=int(os.environ.get('EVAL_SEQ_LEN',2048));val_loss_every=int(os.environ.get('VAL_LOSS_EVERY',4000));sliding_window_enabled=bool(int(os.environ.get('SLIDING_WINDOW_ENABLED','1')));vocab_size=int(os.environ.get('VOCAB_SIZE',8192));num_layers=int(os.environ.get('NUM_LAYERS',11));xsa_last_n=int(os.environ.get('XSA_LAST_N',11));model_dim=int(os.environ.get('MODEL_DIM',512));embedding_dim=int(os.environ.get('EMBEDDING_DIM',512));num_kv_heads=int(os.environ.get('NUM_KV_HEADS',4));num_heads=int(os.environ.get('NUM_HEADS',8));mlp_mult=float(os.environ.get('MLP_MULT',4.));skip_gates_enabled=bool(int(os.environ.get('SKIP_GATES_ENABLED','1')));tie_embeddings=bool(int(os.environ.get('TIE_EMBEDDINGS','1')));logit_softcap=float(os.environ.get('LOGIT_SOFTCAP',3e1));rope_base=float(os.environ.get('ROPE_BASE',1e4));rope_dims=int(os.environ.get('ROPE_DIMS',16));rope_train_seq_len=int(os.environ.get('ROPE_TRAIN_SEQ_LEN',2048));ln_scale=bool(int(os.environ.get('LN_SCALE','1')));qk_gain_init=float(os.environ.get('QK_GAIN_INIT',5.));num_loops=int(os.environ.get('NUM_LOOPS',2));loop_start=int(os.environ.get('LOOP_START',3));loop_end=int(os.environ.get('LOOP_END',5));enable_looping_at=float(os.environ.get('ENABLE_LOOPING_AT',.35));parallel_residual_start=int(os.environ.get('PARALLEL_RESIDUAL_START',7));min_lr=float(os.environ.get('MIN_LR',.0));embed_lr=float(os.environ.get('EMBED_LR',.6));head_lr=float(os.environ.get('HEAD_LR',.008));tied_embed_lr=float(os.environ.get('TIED_EMBED_LR',.03));tied_embed_init_std=float(os.environ.get('TIED_EMBED_INIT_STD',.005));matrix_lr=float(os.environ.get('MATRIX_LR',.022));scalar_lr=float(os.environ.get('SCALAR_LR',.02));muon_momentum=float(os.environ.get('MUON_MOMENTUM',.99));muon_backend_steps=int(os.environ.get('MUON_BACKEND_STEPS',5));muon_momentum_warmup_start=float(os.environ.get('MUON_MOMENTUM_WARMUP_START',.92));muon_momentum_warmup_steps=int(os.environ.get('MUON_MOMENTUM_WARMUP_STEPS',1500));muon_row_normalize=bool(int(os.environ.get('MUON_ROW_NORMALIZE','1')));beta1=float(os.environ.get('BETA1',.9));beta2=float(os.environ.get('BETA2',.95));adam_eps=float(os.environ.get('ADAM_EPS',1e-08));grad_clip_norm=float(os.environ.get('GRAD_CLIP_NORM',.3));eval_stride=int(os.environ.get('EVAL_STRIDE',64));muon_b
eta2=float(os.environ.get('MUON_BETA2',.95));adam_wd=float(os.environ.get('ADAM_WD',.02));muon_wd=float(os.environ.get('MUON_WD',.095));embed_wd=float(os.environ.get('EMBED_WD',.085));ema_decay=float(os.environ.get('EMA_DECAY',.9965));ttt_enabled=bool(int(os.environ.get('TTT_ENABLED','0')));ttt_lr=float(os.environ.get('TTT_LR',.005));ttt_epochs=int(os.environ.get('TTT_EPOCHS',3));ttt_momentum=float(os.environ.get('TTT_MOMENTUM',.9));ttt_chunk_tokens=int(os.environ.get('TTT_CHUNK_TOKENS',32768));etlb_enabled=bool(int(os.environ.get('ETLB_ENABLED','0')));etlb_lr=float(os.environ.get('ETLB_LR',.05));etlb_steps=int(os.environ.get('ETLB_STEPS',5));etlb_clip=float(os.environ.get('ETLB_CLIP',3.));compressor=os.environ.get('COMPRESSOR','brotli');gptq_calibration_batches=int(os.environ.get('GPTQ_CALIBRATION_BATCHES',64));gptq_reserve_seconds=float(os.environ.get('GPTQ_RESERVE_SECONDS',12.));matrix_bits=int(os.environ.get('MATRIX_BITS',6));embed_bits=int(os.environ.get('EMBED_BITS',8));matrix_clip_sigmas=float(os.environ.get('MATRIX_CLIP_SIGMAS',12.85));embed_clip_sigmas=float(os.environ.get('EMBED_CLIP_SIGMAS',2e1));distributed='RANK'in os.environ and'WORLD_SIZE'in os.environ;rank=int(os.environ.get('RANK','0'));world_size=int(os.environ.get('WORLD_SIZE','1'));local_rank=int(os.environ.get('LOCAL_RANK','0'));is_main_process=rank==0;grad_accum_steps=int(os.environ.get('GRAD_ACCUM_STEPS',8//world_size));datasets_dir=os.path.join(data_dir,'datasets',f"fineweb10B_sp{vocab_size}");train_files=os.path.join(datasets_dir,'fineweb_train_*.bin');val_files=os.path.join(datasets_dir,'fineweb_val_*.bin');tokenizer_path=os.path.join(data_dir,'tokenizers',f"fineweb_{vocab_size}_bpe.model");logfile=f"logs/{run_id}.txt";model_path='final_model.pt';quantized_model_path='final_model.int6.ptz' +_logger_hparams=None +def set_logging_hparams(h):global _logger_hparams;_logger_hparams=h +def log(msg,console=True): + if _logger_hparams is None:print(msg);return + if _logger_hparams.is_main_process: + if console:print(msg) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile,'a',encoding='utf-8')as f:print(msg,file=f) +class ValidationData: + def __init__(self,h,device): + self.sp=spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size())!=h.vocab_size:raise ValueError(f"VOCAB_SIZE={h.vocab_size} does not match tokenizer vocab_size={int(self.sp.vocab_size())}") + self.val_tokens=load_validation_tokens(h.val_files,h.eval_seq_len);self.base_bytes_lut,self.has_leading_space_lut,self.is_boundary_token_lut=build_sentencepiece_luts(self.sp,h.vocab_size,device) +def build_sentencepiece_luts(sp,vocab_size,device): + sp_vocab_size=int(sp.vocab_size());assert sp.piece_to_id('▁')!=sp.unk_id(),"Tokenizer must have '▁' (space) as its own token for correct BPB byte counting";table_size=max(sp_vocab_size,vocab_size);base_bytes_np=np.zeros((table_size,),dtype=np.int16);has_leading_space_np=np.zeros((table_size,),dtype=np.bool_);is_boundary_token_np=np.ones((table_size,),dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id)or sp.is_unknown(token_id)or sp.is_unused(token_id):continue + is_boundary_token_np[token_id]=False + if sp.is_byte(token_id):base_bytes_np[token_id]=1;continue + piece=sp.id_to_piece(token_id) + if piece.startswith('▁'):has_leading_space_np[token_id]=True;piece=piece[1:] + base_bytes_np[token_id]=len(piece.encode('utf-8')) + return 
torch.tensor(base_bytes_np,dtype=torch.int16,device=device),torch.tensor(has_leading_space_np,dtype=torch.bool,device=device),torch.tensor(is_boundary_token_np,dtype=torch.bool,device=device) +def load_validation_tokens(pattern,seq_len): + files=[Path(p)for p in sorted(glob.glob(pattern))] + if not files:raise FileNotFoundError(f"No files found for pattern: {pattern}") + tokens=torch.cat([load_data_shard(file)for file in files]).contiguous();usable=(tokens.numel()-1)//seq_len*seq_len + if usable<=0:raise ValueError(f"Validation split is too short for TRAIN_SEQ_LEN={seq_len}") + return tokens[:usable+1] +def load_data_shard(file): + header_bytes=256*np.dtype('0 else 0;num_sequences=(self.num_tokens[si]-1-phase)//self.seq_len;sequence_order=self.rng.permutation(num_sequences);self.start_inds[si]=(phase+sequence_order*self.seq_len).tolist() + def next_batch(self,global_tokens,grad_accum_steps): + device_tokens=global_tokens//(self.world_size*grad_accum_steps);device_batch_size=device_tokens//self.seq_len;remaining=np.array([len(s)for s in self.start_inds],dtype=np.float64);x=torch.empty((device_batch_size,self.seq_len),dtype=torch.int64);y=torch.empty((device_batch_size,self.seq_len),dtype=torch.int64) + for bi in range(device_batch_size): + total=remaining.sum() + if total<=0: + for si in range(len(self.files)):self._reset_shard(si) + remaining=np.array([len(s)for s in self.start_inds],dtype=np.float64);total=remaining.sum() + probs=remaining/total;si=int(self.rng.choice(len(self.files),p=probs));start_ind=self.start_inds[si].pop();remaining[si]-=1;mm=_get_shard_memmap(self.files[si]);window=torch.as_tensor(np.array(mm[start_ind:start_ind+self.seq_len+1],dtype=np.int64));x[bi]=window[:-1];y[bi]=window[1:] + return x.to(self.device,non_blocking=True),y.to(self.device,non_blocking=True) +class RMSNorm(nn.Module): + def __init__(self,eps=None):super().__init__();self.eps=eps + def forward(self,x):return F.rms_norm(x,(x.size(-1),),eps=self.eps) +class CastedLinear(nn.Linear): + def forward(self,x):w=self.weight.to(x.dtype);bias=self.bias.to(x.dtype)if self.bias is not None else None;return F.linear(x,w,bias) +class Rotary(nn.Module): + def __init__(self,dim,base=1e4,train_seq_len=1024,rope_dims=0):super().__init__();self.dim=dim;self.base=base;self.train_seq_len=train_seq_len;self.rope_dims=rope_dims if rope_dims>0 else dim;inv_freq=1./base**(torch.arange(0,self.rope_dims,2,dtype=torch.float32)/self.rope_dims);self.register_buffer('inv_freq',inv_freq,persistent=False);self._seq_len_cached=0;self._cos_cached=None;self._sin_cached=None + def forward(self,seq_len,device,dtype): + if self._cos_cached is None or self._sin_cached is None or self._seq_len_cached!=seq_len or self._cos_cached.device!=device: + rd=self.rope_dims + if seq_len>self.train_seq_len:scale=seq_len/self.train_seq_len;new_base=self.base*scale**(rd/(rd-2));inv_freq=1./new_base**(torch.arange(0,rd,2,dtype=torch.float32,device=device)/rd) + else:inv_freq=self.inv_freq.to(device) + t=torch.arange(seq_len,device=device,dtype=inv_freq.dtype);freqs=torch.outer(t,inv_freq);self._cos_cached=freqs.cos()[None,:,None,:];self._sin_cached=freqs.sin()[None,:,None,:];self._seq_len_cached=seq_len + return self._cos_cached.to(dtype=dtype),self._sin_cached.to(dtype=dtype) +def apply_rotary_emb(x,cos,sin,rope_dims=0): + if rope_dims>0 and rope_dims0: + head_dim=h.model_dim//h.num_heads + for block in self.blocks:block.attn.rope_dims=h.rope_dims;block.attn.rotary=Rotary(head_dim,base=h.rope_base,train_seq_len=h.train_seq_len,rope_dims=h.rope_dims) + 
self.final_norm=RMSNorm();self.lm_head=None if h.tie_embeddings else CastedLinear(h.embedding_dim,h.vocab_size,bias=False) + if self.lm_head is not None:self.lm_head._zero_init=True + if h.xsa_last_n>0: + for i in range(max(0,h.num_layers-h.xsa_last_n),h.num_layers):self.blocks[i].attn.use_xsa=True + if h.parallel_residual_start>=0: + for i in range(h.parallel_residual_start,h.num_layers):self.blocks[i].parallel=True + self.looping_active=False + if h.num_loops>0: + loop_seg=list(range(h.loop_start,h.loop_end+1));all_indices=list(range(h.loop_start)) + for _ in range(h.num_loops+1):all_indices.extend(loop_seg) + all_indices.extend(range(h.loop_end+1,h.num_layers));num_enc=len(all_indices)//2;self.encoder_indices=all_indices[:num_enc];self.decoder_indices=all_indices[num_enc:] + else:self.encoder_indices=list(range(self.num_encoder_layers));self.decoder_indices=list(range(self.num_encoder_layers,h.num_layers)) + self.num_skip_weights=min(len(self.encoder_indices),len(self.decoder_indices));self.skip_weights=nn.Parameter(torch.ones(self.num_skip_weights,h.model_dim,dtype=torch.float32));self.skip_gates=nn.Parameter(torch.zeros(self.num_skip_weights,h.model_dim,dtype=torch.float32))if h.skip_gates_enabled else None;self._init_weights() + def _init_weights(self): + if self.tie_embeddings:nn.init.normal_(self.tok_emb.weight,mean=.0,std=self.tied_embed_init_std) + for(name,module)in self.named_modules(): + if isinstance(module,nn.Linear): + if getattr(module,'_zero_init',False):nn.init.zeros_(module.weight) + elif module.weight.ndim==2 and module.weight.shape[0]>=64 and module.weight.shape[1]>=64:nn.init.orthogonal_(module.weight,gain=1.) + def forward_logits(self,input_ids): + x=self.tok_emb(input_ids);x=F.rms_norm(x,(x.size(-1),)) + if self.embed_proj is not None:x=self.embed_proj(x) + x0=x;skips=[];enc_iter=self.encoder_indices if self.looping_active else range(self.num_encoder_layers);dec_iter=self.decoder_indices if self.looping_active else range(self.num_encoder_layers,self.num_encoder_layers+self.num_decoder_layers) + for i in enc_iter:x=self.blocks[i](x,x0);skips.append(x) + for(skip_idx,i)in enumerate(dec_iter): + if skip_idxG.size(1) + if transposed:X=X.T + for _ in range(steps):A=X@X.T;B=b*A+c*A@A;X=a*X+B@X + return X.T if transposed else X +class Muon(torch.optim.Optimizer): + def __init__(self,params,lr,momentum,backend_steps,nesterov=True,weight_decay=.0,row_normalize=False):super().__init__(params,dict(lr=lr,momentum=momentum,backend_steps=backend_steps,nesterov=nesterov,weight_decay=weight_decay,row_normalize=row_normalize)) + @torch.no_grad() + def step(self,closure=None): + loss=None + if closure is not None: + with torch.enable_grad():loss=closure() + distributed=dist.is_available()and dist.is_initialized();world_size=dist.get_world_size()if distributed else 1;rank=dist.get_rank()if distributed else 0 + for group in self.param_groups: + params=group['params'] + if not params:continue + lr=group['lr'];momentum=group['momentum'];backend_steps=group['backend_steps'];nesterov=group['nesterov'];total_params=sum(int(p.numel())for p in params);updates_flat=torch.zeros(total_params,device=params[0].device,dtype=torch.bfloat16);curr=0 + for(i,p)in enumerate(params): + if i%world_size==rank and p.grad is not None: + g=p.grad;state=self.state[p] + if'momentum_buffer'not in state:state['momentum_buffer']=torch.zeros_like(g) + buf=state['momentum_buffer'];buf.mul_(momentum).add_(g) + if nesterov:g=g.add(buf,alpha=momentum) + if 
group.get('row_normalize',False):row_norms=g.float().norm(dim=-1,keepdim=True).clamp_min(1e-07);g=g/row_norms.to(g.dtype) + g=zeropower_via_newtonschulz5(g,steps=backend_steps);g*=max(1,g.size(0)/g.size(1))**.5;updates_flat[curr:curr+p.numel()]=g.reshape(-1) + curr+=p.numel() + if distributed:dist.all_reduce(updates_flat,op=dist.ReduceOp.SUM) + wd=group.get('weight_decay',.0);curr=0 + for p in params: + if wd>.0:p.data.mul_(1.-lr*wd) + g=updates_flat[curr:curr+p.numel()].view_as(p).to(dtype=p.dtype);p.add_(g,alpha=-lr);curr+=p.numel() + return loss +CONTROL_TENSOR_NAME_PATTERNS=tuple(pattern for pattern in os.environ.get('CONTROL_TENSOR_NAME_PATTERNS','attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates').split(',')if pattern) +class Optimizers: + def __init__(self,h,base_model): + block_named_params=list(base_model.blocks.named_parameters());matrix_params=[p for(name,p)in block_named_params if p.ndim==2 and not any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)];scalar_params=[p for(name,p)in block_named_params if p.ndim<2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)] + if base_model.skip_weights.numel()>0:scalar_params.append(base_model.skip_weights) + if base_model.skip_gates is not None and base_model.skip_gates.numel()>0:scalar_params.append(base_model.skip_gates) + token_lr=h.tied_embed_lr if h.tie_embeddings else h.embed_lr;tok_params=[{'params':[base_model.tok_emb.weight],'lr':token_lr,'base_lr':token_lr}];self.optimizer_tok=torch.optim.AdamW(tok_params,betas=(h.beta1,h.beta2),eps=h.adam_eps,weight_decay=h.embed_wd,fused=True);self.optimizer_muon=Muon(matrix_params,lr=h.matrix_lr,momentum=h.muon_momentum,backend_steps=h.muon_backend_steps,weight_decay=h.muon_wd,row_normalize=h.muon_row_normalize) + for group in self.optimizer_muon.param_groups:group['base_lr']=h.matrix_lr + self.optimizer_scalar=torch.optim.AdamW([{'params':scalar_params,'lr':h.scalar_lr,'base_lr':h.scalar_lr}],betas=(h.beta1,h.beta2),eps=h.adam_eps,weight_decay=h.adam_wd,fused=True);self.optimizers=[self.optimizer_tok,self.optimizer_muon,self.optimizer_scalar] + if base_model.lm_head is not None:self.optimizer_head=torch.optim.Adam([{'params':[base_model.lm_head.weight],'lr':h.head_lr,'base_lr':h.head_lr}],betas=(h.beta1,h.beta2),eps=h.adam_eps,fused=True);self.optimizers.insert(1,self.optimizer_head) + else:self.optimizer_head=None + def __iter__(self):return iter(self.optimizers) + def zero_grad_all(self): + for opt in self.optimizers:opt.zero_grad(set_to_none=True) + def step(self): + for opt in self.optimizers:opt.step() + self.zero_grad_all() +def restore_fp32_params(model): + for module in model.modules(): + if isinstance(module,CastedLinear):module.float() + for(name,param)in model.named_parameters(): + if(param.ndim<2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS))and param.dtype!=torch.float32:param.data=param.data.float() +def collect_hessians(model,train_loader,h,device,n_calibration_batches=64): + hessians={};hooks=[] + def make_hook(name): + def hook_fn(module,inp,out): + x=inp[0].detach().float() + if x.ndim==3:x=x.reshape(-1,x.shape[-1]) + if name not in hessians:hessians[name]=torch.zeros(x.shape[1],x.shape[1],dtype=torch.float32,device=device) + hessians[name].addmm_(x.T,x) + return hook_fn + for(name,module)in model.named_modules(): + if isinstance(module,CastedLinear)and module.weight.numel()>65536: + cat=classify_param(name+'.weight') + if cat 
in('mlp','attn'):hooks.append(module.register_forward_hook(make_hook(name+'.weight'))) + if model.tie_embeddings: + hook_module=model.head_proj if model.head_proj is not None else model.final_norm + def make_output_hook(name): + def hook_fn(module,inp,out): + x=out.detach().float() + if x.ndim==3:x=x.reshape(-1,x.shape[-1]) + if name not in hessians:hessians[name]=torch.zeros(x.shape[1],x.shape[1],dtype=torch.float32,device=device) + hessians[name].addmm_(x.T,x) + return hook_fn + hooks.append(hook_module.register_forward_hook(make_output_hook('tok_emb.weight'))) + model.eval() + with torch.no_grad(): + for _ in range(n_calibration_batches):x,_=train_loader.next_batch(h.train_batch_tokens,h.grad_accum_steps);model.forward_logits(x) + for hook in hooks:hook.remove() + for name in hessians:hessians[name]=hessians[name].cpu()/n_calibration_batches + return hessians +def gptq_quantize_weight(w,H,clip_sigmas=3.,clip_range=63,block_size=128): + W_orig=w.float().clone();rows,cols=W_orig.shape;H=H.float().clone();dead=torch.diag(H)==0;H[dead,dead]=1;damp=.01*H.diag().mean();H.diagonal().add_(damp);perm=torch.argsort(H.diag(),descending=True);invperm=torch.argsort(perm);W_perm=W_orig[:,perm].clone();W_perm[:,dead[perm]]=0;H=H[perm][:,perm];Hinv=torch.cholesky_inverse(torch.linalg.cholesky(H));Hinv=torch.linalg.cholesky(Hinv,upper=True);row_std=W_orig.std(dim=1);s=(clip_sigmas*row_std/clip_range).clamp_min(1e-10).to(torch.float16);sf=s.float();Q=torch.zeros(rows,cols,dtype=torch.int8);W_work=W_perm.clone() + for i1 in range(0,cols,block_size): + i2=min(i1+block_size,cols);W_block=W_work[:,i1:i2].clone();Hinv_block=Hinv[i1:i2,i1:i2];Err=torch.zeros(rows,i2-i1) + for j in range(i2-i1):w_col=W_block[:,j];d=Hinv_block[j,j];q_col=torch.clamp(torch.round(w_col/sf),-clip_range,clip_range);Q[:,i1+j]=q_col.to(torch.int8);err=(w_col-q_col.float()*sf)/d;Err[:,j]=err;W_block[:,j:]-=err.unsqueeze(1)*Hinv_block[j,j:].unsqueeze(0) + if i20:out[name]=(q.float()*s.float().view(q.shape[0],*[1]*(q.ndim-1))).to(orig_dtype) + else:out[name]=(q.float()*float(s.item())).to(orig_dtype) + return out +_BSHF_MAGIC=b'BSHF' +def _byte_shuffle(data,stride=2): + if stride<=1 or len(data)0: + base_model.train();chunk_seqs=(chunk_end-chunk_start)//seq_len + if chunk_seqs>0: + cos_lr=h.ttt_lr*.5*(1.+math.cos(math.pi*ci/max(num_chunks-1,1))) + for pg in optimizer.param_groups:pg['lr']=cos_lr + my_seq_s=chunk_seqs*rank//world_size;my_seq_e=chunk_seqs*(rank+1)//world_size;my_chunk_seqs=my_seq_e-my_seq_s + for _ep in range(h.ttt_epochs): + for bs in range(0,my_chunk_seqs,batch_seqs): + be=min(bs+batch_seqs,my_chunk_seqs);actual_bs=my_seq_s+bs;start_tok=chunk_start+actual_bs*seq_len;end_tok=chunk_start+(my_seq_s+be)*seq_len+1 + if end_tok>val_data.val_tokens.numel():continue + local=val_data.val_tokens[start_tok:end_tok].to(device=device,dtype=torch.int64);x=local[:-1].reshape(-1,seq_len);y=local[1:].reshape(-1,seq_len);optimizer.zero_grad(set_to_none=True) + with torch.autocast(device_type='cuda',dtype=torch.bfloat16):loss=base_model(x,y) + loss.backward() + if world_size>1: + for p in ttt_params: + if p.grad is not None:dist.all_reduce(p.grad,op=dist.ReduceOp.AVG) + torch.nn.utils.clip_grad_norm_(ttt_params,1.);optimizer.step() + if dist.is_available()and dist.is_initialized():dist.all_reduce(loss_sum,op=dist.ReduceOp.SUM);dist.all_reduce(token_count,op=dist.ReduceOp.SUM);dist.all_reduce(byte_count,op=dist.ReduceOp.SUM) + for p in base_model.parameters():p.requires_grad_(True) + base_model.eval();return 
_loss_bpb(loss_sum,token_count,byte_count) +def timed_eval(label,fn,*args,**kwargs):torch.cuda.synchronize();t0=time.perf_counter();val_loss,val_bpb=fn(*args,**kwargs);torch.cuda.synchronize();elapsed_ms=1e3*(time.perf_counter()-t0);log(f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms");return val_loss,val_bpb +def train_model(h,device,val_data): + base_model=GPT(h).to(device).bfloat16();restore_fp32_params(base_model);compiled_model=torch.compile(base_model,dynamic=False,fullgraph=True) + if h.distributed:model=DDP(compiled_model,device_ids=[h.local_rank],broadcast_buffers=False) + else:model=compiled_model + log(f"model_params:{sum(p.numel()for p in base_model.parameters())}");optimizers=Optimizers(h,base_model);train_loader=ShuffledSequenceLoader(h,device);max_wallclock_ms=1e3*h.max_wallclock_seconds if h.max_wallclock_seconds>0 else None + if max_wallclock_ms is not None:max_wallclock_ms-=h.gptq_reserve_seconds*1e3;log(f"gptq:reserving {h.gptq_reserve_seconds:.0f}s, effective={max_wallclock_ms:.0f}ms") + def training_frac(step,elapsed_ms): + if max_wallclock_ms is None:return step/max(h.iterations,1) + return elapsed_ms/max(max_wallclock_ms,1e-09) + def lr_mul(frac): + if h.warmdown_frac<=0:return 1. + if frac>=1.-h.warmdown_frac:return max((1.-frac)/h.warmdown_frac,h.min_lr) + return 1. + def step_fn(step,lr_scale): + optimizers.zero_grad_all();train_loss=torch.zeros((),device=device) + for micro_step in range(h.grad_accum_steps): + if h.distributed:model.require_backward_grad_sync=micro_step==h.grad_accum_steps-1 + x,y=train_loader.next_batch(h.train_batch_tokens,h.grad_accum_steps) + with torch.autocast(device_type='cuda',dtype=torch.bfloat16,enabled=True):loss=model(x,y) + train_loss+=loss.detach();(loss/h.grad_accum_steps).backward() + train_loss/=h.grad_accum_steps;frac=min(step/h.muon_momentum_warmup_steps,1.)if h.muon_momentum_warmup_steps>0 else 1.;muon_momentum=(1-frac)*h.muon_momentum_warmup_start+frac*h.muon_momentum + for group in optimizers.optimizer_muon.param_groups:group['momentum']=muon_momentum + for opt in optimizers: + for group in opt.param_groups:group['lr']=group['base_lr']*lr_scale + if h.grad_clip_norm>0:torch.nn.utils.clip_grad_norm_(base_model.parameters(),h.grad_clip_norm) + optimizers.step();return train_loss + if h.warmup_steps>0: + initial_model_state={name:tensor.detach().cpu().clone()for(name,tensor)in base_model.state_dict().items()};initial_optimizer_states=[copy.deepcopy(opt.state_dict())for opt in optimizers];model.train() + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step,1.) + if warmup_step<=5 or(warmup_step+1)%10==0 or warmup_step+1==h.warmup_steps:log(f"warmup_step: {warmup_step+1}/{h.warmup_steps}") + if h.num_loops>0: + base_model.looping_active=True;log(f"loop_warmup:enabled encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}") + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step,1.) 
+ if warmup_step<=5 or(warmup_step+1)%10==0 or warmup_step+1==h.warmup_steps:log(f"loop_warmup_step: {warmup_step+1}/{h.warmup_steps}") + base_model.looping_active=False + base_model.load_state_dict(initial_model_state,strict=True) + for(opt,state)in zip(optimizers,initial_optimizer_states):opt.load_state_dict(state) + optimizers.zero_grad_all() + if h.distributed:model.require_backward_grad_sync=True + train_loader=ShuffledSequenceLoader(h,device) + ema_state={name:t.detach().float().clone()for(name,t)in base_model.state_dict().items()};ema_decay=h.ema_decay;training_time_ms=.0;stop_after_step=None;torch.cuda.synchronize();t0=time.perf_counter();step=0 + while True: + last_step=step==h.iterations or stop_after_step is not None and step>=stop_after_step;should_validate=last_step or h.val_loss_every>0 and step%h.val_loss_every==0 + if should_validate:torch.cuda.synchronize();training_time_ms+=1e3*(time.perf_counter()-t0);val_loss,val_bpb=eval_val(h,device,val_data,model);log(f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}");torch.cuda.synchronize();t0=time.perf_counter() + if last_step: + if stop_after_step is not None and step0 and not base_model.looping_active and frac>=h.enable_looping_at:base_model.looping_active=True;log(f"layer_loop:enabled step:{step} frac:{frac:.3f} encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}") + train_loss=step_fn(step,scale) + with torch.no_grad(): + for(name,t)in base_model.state_dict().items():ema_state[name].mul_(ema_decay).add_(t.detach().float(),alpha=1.-ema_decay) + step+=1;approx_training_time_ms=training_time_ms+1e3*(time.perf_counter()-t0);should_log_train=h.train_log_every>0 and(step<=5 or step%h.train_log_every==0 or stop_after_step is not None) + if should_log_train:tok_per_sec=step*h.train_batch_tokens/(approx_training_time_ms/1e3);log(f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} train_time: {approx_training_time_ms/60000:.1f}m tok/s: {tok_per_sec:.0f}") + reached_cap=max_wallclock_ms is not None and approx_training_time_ms>=max_wallclock_ms + if h.distributed and max_wallclock_ms is not None:reached_cap_tensor=torch.tensor(int(reached_cap),device=device);dist.all_reduce(reached_cap_tensor,op=dist.ReduceOp.MAX);reached_cap=bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap:stop_after_step=step + log(f"peak memory allocated: {torch.cuda.max_memory_allocated()//1024//1024} MiB reserved: {torch.cuda.max_memory_reserved()//1024//1024} MiB");log('ema:applying EMA weights');current_state=base_model.state_dict();avg_state={name:t.to(dtype=current_state[name].dtype)for(name,t)in ema_state.items()};base_model.load_state_dict(avg_state,strict=True);return base_model,compiled_model +def train_and_eval(h,device): + random.seed(h.seed);np.random.seed(h.seed);torch.manual_seed(h.seed);torch.cuda.manual_seed_all(h.seed);val_data=ValidationData(h,device);_ts=len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')));log('train_shards: '+str(_ts));log('val_tokens: '+str(val_data.val_tokens.numel()-1));base_model,compiled_model=train_model(h,device,val_data);torch._dynamo.reset();timed_eval('pre-quantization post-ema',eval_val,h,device,val_data,compiled_model);serialize(h,base_model,Path(__file__).read_text(encoding='utf-8')) + if h.distributed:dist.barrier() + eval_model=deserialize(h,device) + if h.num_loops>0:eval_model.looping_active=True + 
compiled_model=torch.compile(eval_model,dynamic=False,fullgraph=True);timed_eval('quantized',eval_val,h,device,val_data,compiled_model) + if h.sliding_window_enabled:timed_eval('quantized_sliding_window',eval_val_sliding,h,device,val_data,eval_model) + if h.ttt_enabled and h.sliding_window_enabled: + del eval_model,compiled_model;torch._dynamo.reset();torch.cuda.empty_cache();ttt_model=deserialize(h,device) + if h.num_loops>0:ttt_model.looping_active=True + timed_eval('quantized_ttt',eval_val_ttt,h,device,val_data,ttt_model);del ttt_model + if h.etlb_enabled and h.sliding_window_enabled: + if'eval_model'not in dir(): + eval_model=deserialize(h,device) + if h.num_loops>0:eval_model.looping_active=True + timed_eval('quantized_sliding_etlb',eval_val_sliding_etlb,h,device,val_data,eval_model) +def main(): + world_size=int(os.environ.get('WORLD_SIZE','1'));local_rank=int(os.environ.get('LOCAL_RANK','0'));distributed='RANK'in os.environ and'WORLD_SIZE'in os.environ + if not torch.cuda.is_available():raise RuntimeError('CUDA is required') + if world_size<=0:raise ValueError(f"WORLD_SIZE must be positive, got {world_size}") + if 8%world_size!=0:raise ValueError(f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral") + device=torch.device('cuda',local_rank);torch.cuda.set_device(device) + if distributed:dist.init_process_group(backend='nccl',device_id=device);dist.barrier() + torch.backends.cuda.matmul.allow_tf32=True;torch.backends.cudnn.allow_tf32=True;torch.set_float32_matmul_precision('high');from torch.backends.cuda import enable_cudnn_sdp,enable_flash_sdp,enable_math_sdp,enable_mem_efficient_sdp;enable_cudnn_sdp(False);enable_flash_sdp(True);enable_mem_efficient_sdp(False);enable_math_sdp(False);torch._dynamo.config.optimize_ddp=False;h=Hyperparameters();set_logging_hparams(h) + if h.is_main_process: + os.makedirs('logs',exist_ok=True);log(100*'=',console=False);log('Hyperparameters:',console=True) + for(k,v)in sorted(vars(type(h)).items()): + if not k.startswith('_'):log(f" {k}: {v}",console=True) + log('='*100,console=False);log(f"Running Python {sys.version}",console=False);log(f"Running PyTorch {torch.__version__}",console=False);log(subprocess.run(['nvidia-smi'],stdout=subprocess.PIPE,stderr=subprocess.PIPE,text=True,check=False).stdout,console=False);log('='*100,console=False) + train_and_eval(h,device) + if distributed:dist.destroy_process_group() +if __name__=='__main__':main() \ No newline at end of file diff --git a/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_seed1337.log b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_seed1337.log new file mode 100644 index 0000000000..5dfcbb4906 --- /dev/null +++ b/records/track_non_record_16mb/2026-04-11_SP8192_SOTA_QK525_TTT_1.0704_1xA100/train_seed1337.log @@ -0,0 +1,169 @@ +SP8192 data ready: 128 train shards, 1 val shards + +note: This is an issue with the package mentioned above, not pip. +hint: See above for details. 
+Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: /ibex/project/c2334/huanyi/parameter-golf/data/ + datasets_dir: /ibex/project/c2334/huanyi/parameter-golf/data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 8 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 8 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/v6_exp62_sota_qk525.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 14400.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: v6_exp62_sota_qk525 + scalar_lr: 0.02 + seed: 1337 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: /ibex/project/c2334/huanyi/parameter-golf/data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: /ibex/project/c2334/huanyi/parameter-golf/data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 200 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: /ibex/project/c2334/huanyi/parameter-golf/data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 2000 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 1 + xsa_last_n: 11 +train_shards: 128 +val_tokens: 40546304 +model_params:35944536 +gptq:reserving 12s, effective=14388000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0072 val_bpb: 3.4876 +1/20000 train_loss: 9.0073 train_time: 0.0m tok/s: 454362 +2/20000 train_loss: 12.2614 train_time: 0.1m tok/s: 450374 +3/20000 train_loss: 11.0707 train_time: 0.1m tok/s: 448789 +4/20000 train_loss: 9.4383 train_time: 0.1m tok/s: 447753 +5/20000 train_loss: 8.3141 train_time: 0.1m tok/s: 447202 +200/20000 train_loss: 3.6622 train_time: 5.9m tok/s: 442719 +400/20000 train_loss: 3.3864 train_time: 11.8m tok/s: 443116 +600/20000 train_loss: 3.3138 train_time: 17.7m tok/s: 443407 +800/20000 train_loss: 3.2536 train_time: 23.6m tok/s: 443604 +1000/20000 train_loss: 3.2125 train_time: 29.5m tok/s: 443717 +1200/20000 train_loss: 3.1716 train_time: 35.4m tok/s: 443780 +1400/20000 train_loss: 3.1705 train_time: 41.3m tok/s: 443811 +1600/20000 
train_loss: 3.1825 train_time: 47.3m tok/s: 443812 +1800/20000 train_loss: 3.1075 train_time: 53.2m tok/s: 443791 +2000/20000 train_loss: 3.1216 train_time: 59.1m tok/s: 443767 +2000/20000 val_loss: 3.0966 val_bpb: 1.1990 +2200/20000 train_loss: 3.0967 train_time: 65.0m tok/s: 443740 +2400/20000 train_loss: 3.0812 train_time: 70.9m tok/s: 443717 +2600/20000 train_loss: 3.1018 train_time: 76.8m tok/s: 443698 +2800/20000 train_loss: 3.0634 train_time: 82.7m tok/s: 443679 +layer_loop:enabled step:2842 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +3000/20000 train_loss: 3.0332 train_time: 90.9m tok/s: 432389 +3200/20000 train_loss: 3.0379 train_time: 99.8m tok/s: 420365 +3400/20000 train_loss: 2.9864 train_time: 108.6m tok/s: 410274 +3600/20000 train_loss: 2.9870 train_time: 117.5m tok/s: 401733 +3800/20000 train_loss: 2.9655 train_time: 126.3m tok/s: 394378 +4000/20000 train_loss: 2.9861 train_time: 135.1m tok/s: 387995 +4000/20000 val_loss: 2.9654 val_bpb: 1.1482 +4200/20000 train_loss: 2.9799 train_time: 144.0m tok/s: 382392 +4400/20000 train_loss: 2.9577 train_time: 152.8m tok/s: 377439 +4600/20000 train_loss: 2.9265 train_time: 161.6m tok/s: 373011 +4800/20000 train_loss: 2.9243 train_time: 170.5m tok/s: 369061 +5000/20000 train_loss: 2.8990 train_time: 179.3m tok/s: 365502 +5200/20000 train_loss: 2.8885 train_time: 188.1m tok/s: 362273 +5400/20000 train_loss: 2.9150 train_time: 197.0m tok/s: 359341 +5600/20000 train_loss: 2.8177 train_time: 205.8m tok/s: 356663 +5800/20000 train_loss: 2.8467 train_time: 214.6m tok/s: 354200 +6000/20000 train_loss: 2.8160 train_time: 223.5m tok/s: 351940 +6000/20000 val_loss: 2.8125 val_bpb: 1.0890 +6200/20000 train_loss: 2.7801 train_time: 232.3m tok/s: 349855 +6371/20000 val_loss: 2.7826 val_bpb: 1.0774 +stopping_early: wallclock_cap train_time: 14389478ms step: 6371/20000 +peak memory allocated: 41862 MiB reserved: 41930 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.77918548 val_bpb:1.07609792 eval_time:48098ms +Serialized model: 135431033 bytes +Code size: 49104 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 27.1s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int8): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15970123 bytes +Total submission size quantized+brotli: 16019227 bytes +EXIT_CODE=1