
Commit 49d1068

Itssshikhar and claude committed
Record real wd_paired result: q_ttt=1.07974 + add safe_launch guard
The prior session's "wd_paired 1.08009" was a no-op (FS turbulence rolled train_pr1493.py back to the pre-stacking version mid-session; the log had zero hits on tagged=22). This commit records the first real wd_paired run on commit 74dc702. Result (seed 42, real paired-head Muon firing, tagged=22 confirmed on all 8 ranks): q_ttt 1.07974, q_sw 1.08209, q 1.09891, pre 1.08610. The stack is -0.00129 BPB vs raw PR1493 and -0.00055 vs wd alone.

Also:
- safe_launch.sh: pre-launch guard that asserts HEAD/md5/git-blob/symbol-count/working-tree-clean before exec'ing torchrun (see the sketch after this message).
- requirements.txt: add brotli (previously implicit via the Modal image only; attempt 1 of this run crashed in serialize() because the local env lacked it).
- pr1493_priority_results.md: append wd_paired row (openai#6).
- pr1493_wd_paired_session.md: full session writeup (verification methodology, two attempts, learnings, plan-doc vs reality).
- logs/pr1493_wd_paired_s42.{txt,stdout}: real run log + stdout.
- logs/pr1493_wd_paired_s42.{txt,stdout}.attempt1_brotli_crash: preserved evidence of the brotli ModuleNotFoundError crash.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
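A minimal sketch of the guard idea, in Python for illustration (the actual safe_launch.sh in this commit is a shell script; the expected values below come from the log, and the symbol-count check is a placeholder since the definition behind symbols=6 isn't part of this excerpt):

```python
import hashlib
import os
import subprocess

TRAIN_FILE = "train_pr1493.py"
EXPECTED_HEAD = "74dc7028a06a0f52e2ce23a925ef24404e93ca1b"  # values taken from
EXPECTED_MD5 = "968e5ab744772b096a8f9b521656019d"           # the [safe_launch]
EXPECTED_BLOB = "1e4f7b4391f9a82b0ca7f735bbbb0db6eea8e8ad"  # OK line in the log

def sh(*cmd: str) -> str:
    return subprocess.check_output(cmd, text=True).strip()

def main() -> None:
    # Refuse to launch unless the tree is exactly the code we think it is.
    assert sh("git", "rev-parse", "HEAD") == EXPECTED_HEAD, "wrong HEAD"
    assert sh("git", "status", "--porcelain") == "", "working tree not clean"
    assert sh("git", "hash-object", TRAIN_FILE) == EXPECTED_BLOB, "blob drift"
    src = open(TRAIN_FILE, "rb").read()
    assert hashlib.md5(src).hexdigest() == EXPECTED_MD5, "md5 drift"
    # Placeholder for the symbol-count assertion (symbols=6 in the log); the
    # real guard's definition of "symbols" is not shown in this commit page.
    assert src.decode().count("paired_head") > 0, "expected symbols missing"
    print(f"[safe_launch] OK head={EXPECTED_HEAD} md5={EXPECTED_MD5}")
    # Replace this process with torchrun, as the real script does.
    os.execvp("torchrun", ["torchrun", "--standalone",
                           "--nproc_per_node=8", TRAIN_FILE])

if __name__ == "__main__":
    main()
```

The point of the guard is to make the prior session's failure mode (a silent mid-session rollback of train_pr1493.py) unlaunchable: if anything drifts, the run refuses to start instead of producing a no-op result.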
1 parent 74dc702 commit 49d1068

8 files changed

Lines changed: 1134 additions & 1 deletion
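For context: Muon-style optimizers orthogonalize 2-D weight updates with a Newton-Schulz iteration before applying them, and the log's "muon:paired-head NS enabled for q/k matrices tagged=22" suggests each query-head update is orthogonalized jointly with its matching key-head update. The sketch below shows the widely published NS5 iteration plus one plausible reading of that pairing; it is an assumption about what train_pr1493.py does, not its actual code, and it assumes equal q/k head counts for brevity (this run actually uses GQA with 8 query heads and 4 KV heads, so the real mapping is presumably many-to-one).

```python
import torch

def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Quintic Newton-Schulz orthogonalization as popularized by Muon."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)          # scale so singular values are <= ~1
    transposed = X.size(0) > X.size(1)
    if transposed:
        X = X.T                        # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X  # push singular values toward 1
    return X.T if transposed else X

def paired_head_ns(q_upd, k_upd, num_heads=8, steps=5):
    """Hypothetical paired-head wiring: orthogonalize q-head i together with
    k-head i as one stacked matrix, so the pair shares one whitening."""
    d_model = q_upd.size(1)
    qh = q_upd.view(num_heads, -1, d_model)  # (heads, head_dim, d_model)
    kh = k_upd.view(num_heads, -1, d_model)
    out_q, out_k = [], []
    for i in range(num_heads):
        pair = newton_schulz5(torch.cat([qh[i], kh[i]], dim=0), steps)
        oq, ok = pair.split([qh[i].size(0), kh[i].size(0)], dim=0)
        out_q.append(oq)
        out_k.append(ok)
    return torch.cat(out_q).view_as(q_upd), torch.cat(out_k).view_as(k_upd)
```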

logs/pr1493_wd_paired_s42.stdout

Lines changed: 166 additions & 0 deletions
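Since a missing local brotli install is what killed attempt 1, here is a minimal sketch of the serialize-then-compress step implied by the log's "Serialized model quantized+brotli" line and the compressor: brotli hyperparameter; serialize_compressed and its internals are illustrative, not the repo's serialize():

```python
import io

import brotli  # attempt 1 died at this import with ModuleNotFoundError:
import torch   # the package existed only in the Modal image, not locally

def serialize_compressed(state_dict) -> bytes:
    """Illustrative: torch.save to a buffer, then brotli-compress the bytes."""
    buf = io.BytesIO()
    torch.save(state_dict, buf)
    raw = buf.getvalue()
    packed = brotli.compress(raw, quality=11)  # max compression quality
    print(f"raw={len(raw)} bytes, brotli={len(packed)} bytes")
    return packed
```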
@@ -0,0 +1,166 @@
[safe_launch] OK ts=20260429T095343Z head=74dc7028a06a0f52e2ce23a925ef24404e93ca1b md5=968e5ab744772b096a8f9b521656019d blob=1e4f7b4391f9a82b0ca7f735bbbb0db6eea8e8ad symbols=6
[safe_launch] exec: torchrun --standalone --nproc_per_node=8 train_pr1493.py
W0429 09:53:45.032000 35682 torch/distributed/run.py:803]
W0429 09:53:45.032000 35682 torch/distributed/run.py:803] *****************************************
W0429 09:53:45.032000 35682 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0429 09:53:45.032000 35682 torch/distributed/run.py:803] *****************************************
Hyperparameters:
adam_eps: 1e-08
adam_wd: 0.02
beta1: 0.9
beta2: 0.95
compressor: brotli
data_dir: ./data/
datasets_dir: ./data/datasets/fineweb10B_sp8192
distributed: True
doc_shuffle_bos_id: -1
doc_shuffle_enabled: False
ema_decay: 0.9965
embed_bits: 8
embed_clip_sigmas: 20.0
embed_lr: 0.6
embed_wd: 0.085
embedding_dim: 512
enable_looping_at: 0.35
etlb_clip: 3.0
etlb_enabled: False
etlb_lr: 0.05
etlb_steps: 5
eval_num_loops: -1
eval_seq_len: 2048
eval_stride: 64
gptq_calibration_batches: 64
gptq_reserve_seconds: 12.0
grad_accum_steps: 1
grad_clip_norm: 0.3
head_lr: 0.008
iha_enabled: False
iha_mix_v: False
is_main_process: True
iterations: 20000
ln_scale: True
local_rank: 0
logfile: logs/pr1493_wd_paired_s42.txt
logit_softcap: 30.0
loop_end: 5
loop_start: 3
matrix_bits: 6
matrix_clip_sigmas: 12.85
matrix_lr: 0.022
max_wallclock_seconds: 600.0
min_lr: 0.0
mlp_mult: 4.0
model_dim: 512
model_path: final_model.pt
mtp_steps: 0
mtp_weight: 0.0
muon_backend_steps: 5
muon_beta2: 0.95
muon_momentum: 0.99
muon_momentum_warmup_start: 0.92
muon_momentum_warmup_steps: 1500
muon_row_normalize: True
muon_wd: 0.095
num_heads: 8
num_kv_heads: 4
num_layers: 11
num_loops: 2
paired_head_muon_enabled: True
parallel_residual_start: 7
qk_gain_init: 5.25
quantized_model_path: final_model.int6.ptz
rank: 0
rope_base: 10000.0
rope_dims: 16
rope_train_seq_len: 2048
run_id: pr1493_wd_paired_s42
scalar_lr: 0.02
seed: 42
skip_gates_enabled: True
sliding_window_enabled: True
tie_embeddings: True
tied_embed_init_std: 0.005
tied_embed_lr: 0.03
tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model
train_batch_tokens: 786432
train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin
train_log_every: 500
train_seq_len: 2048
ttt_chunk_tokens: 32768
ttt_enabled: True
ttt_epochs: 5
ttt_lr: 0.007
ttt_momentum: 0.9
val_batch_tokens: 524288
val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin
val_loss_every: 4000
vocab_size: 8192
warmdown_frac: 0.72
warmup_steps: 20
wd_sched_high_factor: 1.5
wd_sched_hold_frac: 0.4
wd_sched_low_factor: 0.65
wd_sched_ramp_frac: 0.85
wd_schedule_enabled: True
world_size: 8
xsa_last_n: 11
train_shards: 128
val_tokens: 40540160
model_params:35944536
muon:paired-head NS enabled for q/k matrices tagged=22
gptq:reserving 12s, effective=588000ms
warmup_step: 1/20
warmup_step: 2/20
warmup_step: 3/20
warmup_step: 4/20
warmup_step: 5/20
warmup_step: 6/20
warmup_step: 10/20
warmup_step: 20/20
loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
loop_warmup_step: 1/20
loop_warmup_step: 2/20
loop_warmup_step: 3/20
loop_warmup_step: 4/20
loop_warmup_step: 5/20
loop_warmup_step: 6/20
loop_warmup_step: 10/20
loop_warmup_step: 20/20
0/20000 val_loss: 9.0090 val_bpb: 3.4877
1/20000 train_loss: 9.0111 train_time: 0.0m tok/s: 8357493
2/20000 train_loss: 12.3696 train_time: 0.0m tok/s: 8224856
3/20000 train_loss: 11.1322 train_time: 0.0m tok/s: 8106486
4/20000 train_loss: 9.4108 train_time: 0.0m tok/s: 8044845
5/20000 train_loss: 8.3268 train_time: 0.0m tok/s: 8014159
500/20000 train_loss: 3.3250 train_time: 0.8m tok/s: 7767075
1000/20000 train_loss: 3.1904 train_time: 1.7m tok/s: 7753753
1500/20000 train_loss: 3.1009 train_time: 2.5m tok/s: 7758001
2000/20000 train_loss: 3.0633 train_time: 3.4m tok/s: 7762917
layer_loop:enabled step:2032 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
2500/20000 train_loss: 3.0652 train_time: 4.6m tok/s: 7136927
3000/20000 train_loss: 2.9412 train_time: 5.8m tok/s: 6743041
3500/20000 train_loss: 2.9616 train_time: 7.1m tok/s: 6471277
4000/20000 train_loss: 2.9060 train_time: 8.3m tok/s: 6295682
4000/20000 val_loss: 2.8659 val_bpb: 1.1095
4500/20000 train_loss: 2.7602 train_time: 9.6m tok/s: 6166690
4596/20000 val_loss: 2.8082 val_bpb: 1.0871
stopping_early: wallclock_cap train_time: 588152ms step: 4596/20000
peak memory allocated: 39046 MiB reserved: 39070 MiB
ema:applying EMA weights
pre-quantization post-ema val_loss:2.80550662 val_bpb:1.08609904 eval_time:6986ms
Serialized model: 135431033 bytes
Code size: 57003 bytes
GPTQ:collecting Hessians from calibration data...
GPTQ:collected 67 Hessians in 12.8s
Quantized weights:
gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight
gptq (int8): tok_emb.weight
passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights
Serialized model quantized+brotli: 15972921 bytes
Total submission size quantized+brotli: 16029924 bytes
quantized:eval_num_loops=2 looping_active=True encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
quantized val_loss:2.83861053 val_bpb:1.09891460 eval_time:24849ms
quantized_sliding_window val_loss:2.79516170 val_bpb:1.08209420 eval_time:121664ms
ttt:eval_num_loops=2 looping_active=True encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
ttt:start chunks=1238 ttt_lr=0.007 ttt_epochs=5
quantized_ttt val_loss:2.78909020 val_bpb:1.07974373 eval_time:514291ms
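A quick consistency check on the numbers above, assuming the usual conversion bpb = (loss in nats/token) / ln 2 × (tokens/bytes); the bytes-per-token ratio is inferred from one logged pair rather than read from the code:

```python
import math

# Infer the tokens -> bytes factor from the pre-quantization post-ema line.
val_loss, val_bpb = 2.80550662, 1.08609904
bytes_per_token = val_loss / (math.log(2) * val_bpb)
print(f"implied bytes/token ~= {bytes_per_token:.3f}")  # ~3.727

# The same ratio reproduces the other eval lines to ~1e-5:
for loss in (2.83861053, 2.79516170, 2.78909020):  # q, q_sw, q_ttt
    print(f"loss {loss:.8f} -> bpb ~= {loss / math.log(2) / bytes_per_token:.5f}")

# The commit-message deltas are plain subtraction on q_ttt:
# 1.07974 + 0.00129 ~= 1.08103 (implied raw PR1493); 1.07974 + 0.00055 ~= 1.08029 (wd alone).
```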
