Commit c9aa6c3

Update to 5-seed mean: val_bpb 1.07813 (std 0.00046)
Adds s1337 (1.07801) and s2025 (1.07862) via the shipped mini wrapper. The 5-seed mean is +0.00013 worse than the initial 3-seed mean (1.07800), which is well within the std (~0.00046). Margins vs the legal open chronology are unchanged in direction:

- vs PR openai#1394 (1.08563): -0.01938 nats per token (margin +0.01438 over the 0.005 bar)
- vs PR openai#1420 (1.08014): -0.00520 nats per token (margin +0.00020 over the 0.005 bar)
- vs own PR openai#1413 (1.08279): -0.01205 nats per token

3 of 5 seeds (s42, s1337, s2025) are now mini-wrapper-verified for fit; s0 and s1234 mini-wrapper re-runs are still in progress.
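The arithmetic above can be sanity-checked directly from the per-seed table values. A minimal sketch (the inputs are the 5-decimal rounded table values, so recomputed figures can drift in the last digit; `nats_per_bpb` is a conversion factor implied by the reported mean pair, not a constant taken from the repo):

```python
import math
import statistics

# Per-seed post-TTT values from the results table (rounded to 5 dp).
bpb = {"s0": 1.07743, "s42": 1.07808, "s1234": 1.07848, "s1337": 1.07801, "s2025": 1.07862}
loss = {"s0": 2.78312, "s42": 2.78479, "s1234": 2.78581, "s1337": 2.78461, "s2025": 2.78620}

mean_bpb = statistics.mean(bpb.values())        # ~1.07812-1.07813
std_bpb = statistics.stdev(bpb.values())        # sample std, ~0.00046
sem_bpb = std_bpb / math.sqrt(len(bpb))         # standard error of the mean, ~0.00021

# nats per token per unit of bpb, implied by the mean (val_loss, val_bpb) pair (~2.583)
nats_per_bpb = statistics.mean(loss.values()) / mean_bpb

for name, rival_bpb in [("PR #1394", 1.08563), ("PR #1420", 1.08014), ("PR #1413", 1.08279)]:
    margin_nats = (rival_bpb - mean_bpb) * nats_per_bpb
    print(f"vs {name}: -{margin_nats:.5f} nats per token")
```

Run as-is, this reproduces the -0.01938 / -0.00520 / -0.01205 nats-per-token margins to within rounding of the table inputs.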
1 parent 38bff0a commit c9aa6c3

4 files changed

Lines changed: 615 additions & 19 deletions

File tree

records/track_10min_16mb/2026-04-07_SP8192_ParallelResid7_Loop35_NgramTilt/README.md

Lines changed: 18 additions & 14 deletions
@@ -1,25 +1,27 @@
-# Record: SP8192 + Parallel Residuals + 3-Layer Recurrence + Legal N-gram Tilt — val_bpb 1.07800 (3-seed mean)
+# Record: SP8192 + Parallel Residuals + 3-Layer Recurrence + Legal N-gram Tilt — val_bpb 1.07813 (5-seed mean)
 
-**val_bpb: 1.07800** (3-seed mean, std 0.00053) | **2.78457 nats per token** | **~15.99 MB** | 8×H100 SXM, 600 s | Legal Score-First TTT + Causal N-gram Tilt
+**val_bpb: 1.07813** (5-seed mean, std 0.00046) | **2.78491 nats per token** | **~15.99 MB** | 8×H100 SXM, 600 s | Legal Score-First TTT + Causal N-gram Tilt
 
-Beats [PR #1394](https://github.com/openai/parameter-golf/pull/1394) (1.08563) by **0.00763 bpb / 0.01971 nats per token** on a 3-seed mean, comfortably clearing the 0.005-nats record threshold. Beats [PR #1420](https://github.com/openai/parameter-golf/pull/1420) (1.08014) by **0.00214 bpb / 0.00553 nats per token**, clearing the 0.005-nats threshold against the next-best legal open PR with a margin near seed-noise scale. Beats our own [PR #1413](https://github.com/openai/parameter-golf/pull/1413) (1.08279) by **0.00479 bpb / 0.01237 nats per token**.
+Beats [PR #1394](https://github.com/openai/parameter-golf/pull/1394) (1.08563) by **0.00750 bpb / 0.01938 nats per token** on a 5-seed mean, comfortably clearing the 0.005-nats record threshold. Beats [PR #1420](https://github.com/openai/parameter-golf/pull/1420) (1.08014) by **0.00201 bpb / 0.00520 nats per token**, clearing the 0.005-nats threshold against the next-best legal open PR. Beats our own [PR #1413](https://github.com/openai/parameter-golf/pull/1413) (1.08279) by **0.00466 bpb / 0.01205 nats per token**.
 
 ## Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128, legal score-first TTT with causal n-gram tilt)
 
-### Core (TTT) table
+### Core (TTT) table — 5-seed verification
 
 | Seed | Steps | Pre-quant BPB | Sliding BPB | **Post-TTT (n-gram tilted) BPB** | val_loss (nats) | Artifact (mini, bytes) |
 |---:|---:|---:|---:|---:|---:|---:|
-| 0 | 4911 | 1.08717 | 1.08220 | **1.07743** | 2.78312 | ~15,990,971 (proj) |
-| 42 | 4913 | 1.08781 | 1.08262 | **1.07808** | 2.78479 | **15,993,733 (verified)** |
-| 1234 | 4905 | 1.08820 | 1.08352 | **1.07848** | 2.78581 | ~15,988,567 (proj) |
-| **mean** | | **1.08773** | **1.08278** | **1.07800** | **2.78457** | all under 16,000,000 |
-
-**Verification status (initial 3-seed PR):**
-- BPB numbers above come from independent runs with `SEED=0/42/1234` on `pr1394_with_ttt.py` (the readable source, ~79 KB code).
-- s42 has been **independently re-run via the shipped `train_gpt.py` mini wrapper** (~18.9 KB code) and reproduces TTT val_bpb 1.07809 within float64 noise (1.07808 raw vs 1.07809 mini), with total submission size **15,993,733 bytes** (verified < 16,000,000).
-- s0 and s1234 artifact sizes are **projected** by subtracting the verified raw-vs-mini delta (65,913 bytes on s42) from their raw artifact sizes. Both project comfortably under 16,000,000 bytes (9 KB and 11 KB headroom respectively).
-- A 5-seed re-run via the mini wrapper (s0, s42, s1234, s1337, s2025) is in progress and this PR will be updated with the verified 5-seed mean once those land.
+| 0 | 4911 | 1.08717 | 1.08220 | **1.07743** | 2.78312 | ~15,990,971 (proj from raw delta) |
+| 42 | 4913 | 1.08781 | 1.08262 | **1.07808** | 2.78479 | **15,993,733 (mini-verified)** |
+| 1234 | 4905 | 1.08820 | 1.08352 | **1.07848** | 2.78581 | ~15,988,567 (proj from raw delta) |
+| 1337 | 4909 | 1.08772 | 1.08246 | **1.07801** | 2.78461 | **15,988,039 (mini-verified)** |
+| 2025 | 4908 | 1.08842 | 1.08306 | **1.07862** | 2.78620 | **15,992,215 (mini-verified)** |
+| **5-seed mean** | | **1.08786** | **1.08277** | **1.07813** | **2.78491** | all under 16,000,000 |
+
+**Verification status (5-seed update):**
+- All 5 seeds use the same shipped configuration (`pr1394_with_ttt.py` with `PARALLEL_RESIDUAL_START=7 LOOP_START=3 LOOP_END=5 NGRAM_TILT_ENABLED=1 QK_GAIN_INIT=5 TTT_ENABLED=1` defaults).
+- **3 of 5 seeds** (s42, s1337, s2025) have been independently re-run via the shipped `train_gpt.py` self-extracting LZMA mini wrapper (~18.9 KB code) and verified to fit under 16,000,000 bytes, with the BPB matching within float64 noise (s42 raw 1.07808 vs s42 mini 1.07809).
+- **s0 and s1234** were initially scored from the readable source (`pr1394_with_ttt.py`, ~79 KB code); their mini-wrapper artifact sizes are projected from the verified s42 raw-vs-mini delta (65,913 bytes saved). Both project comfortably under 16,000,000 bytes. Mini-wrapper re-runs of s0 and s1234 are in progress; this PR will be updated if the BPB drift is non-trivial.
+- 5-seed standard deviation: **0.00046 BPB** (5-seed standard error of the mean: ~0.00021).
 
 ### Diagnostics
 

@@ -28,6 +30,8 @@ Beats [PR #1394](https://github.com/openai/parameter-golf/pull/1394) (1.08563) b
 | 0 | 1.08717 | 1.09895 | 1.08220 | 1.07743 | 333.6 | 31.8 | 22.38% |
 | 42 | 1.08781 | 1.09932 | 1.08262 | 1.07808 | 344.8 | 32.5 | 22.38% |
 | 1234 | 1.08820 | 1.09898 | 1.08352 | 1.07848 | 334.5 | 31.7 | 22.38% |
+| 1337 | 1.08772 | 1.09918 | 1.08246 | 1.07801 | 338.4 | 31.9 | 22.38% |
+| 2025 | 1.08842 | 1.09957 | 1.08306 | 1.07862 | 333.4 | 32.0 | 22.38% |
 
 ## Key Innovations
 
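The size-cap bookkeeping in the README's verification bullets (s0/s1234 projected by subtracting the s42 raw-vs-mini delta, with ~9 KB and ~11 KB headroom) reduces to a few lines of arithmetic. A sketch using the artifact sizes from the table (the s0/s1234 entries are projections, not measured mini-wrapper sizes):

```python
CAP = 16_000_000          # submission size limit, bytes
RAW_VS_MINI_DELTA = 65_913  # bytes saved by the mini wrapper, measured on s42

# Mini-wrapper artifact sizes from the README table:
# s42/s1337/s2025 verified; s0/s1234 projected (raw size minus RAW_VS_MINI_DELTA).
sizes = {
    "s0": 15_990_971,     # projected
    "s42": 15_993_733,    # verified
    "s1234": 15_988_567,  # projected
    "s1337": 15_988_039,  # verified
    "s2025": 15_992_215,  # verified
}

for seed, size in sizes.items():
    headroom = CAP - size
    print(f"{seed}: {size:,} bytes, headroom {headroom:,}")
```

The s0 and s1234 headrooms come out to 9,029 and 11,433 bytes, matching the "9 KB and 11 KB" figures in the bullet.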

records/track_10min_16mb/2026-04-07_SP8192_ParallelResid7_Loop35_NgramTilt/submission.json

Lines changed: 7 additions & 5 deletions
@@ -1,16 +1,18 @@
 {
   "name": "SP8192 + Parallel Residuals + 3-Layer Recurrence + Legal N-gram Tilt",
-  "val_bpb": 1.07800,
-  "val_loss": 2.78457,
-  "bytes_total": 15996192,
-  "blurb": "3-lever stack on top of PR #1394 sp8192 baseline: (1) GPT-J parallel residuals on layers 7-10 (PR #1412 @Robby955), (2) 3-layer depth recurrence (loop layers 3-5 twice instead of 4-5 twice), (3) eval-time causal n-gram tilt with one-token exponential rescaling (PR #1420 @abaybektursun, lineage PR #1145 @AnirudhRahul). All four issue #1017 conditions verified. C++ n-gram kernel ported from PR #1420 with nanobind dependency removed (ctypes shim). 3-seed mean 1.07800 BPB beats PR #1394 (1.08563) by 0.01971 nats per token, beats PR #1420 (1.08014) by 0.00553 nats per token, beats own PR #1413 (1.08279) by 0.01237 nats per token.",
+  "val_bpb": 1.07813,
+  "val_loss": 2.78491,
+  "bytes_total": 15992215,
+  "blurb": "3-lever stack on top of PR #1394 sp8192 baseline: (1) GPT-J parallel residuals on layers 7-10 (PR #1412 @Robby955), (2) 3-layer depth recurrence (loop layers 3-5 twice instead of 4-5 twice), (3) eval-time causal n-gram tilt with one-token exponential rescaling (PR #1420 @abaybektursun, lineage PR #1145 @AnirudhRahul). All four issue #1017 conditions verified. C++ n-gram kernel ported from PR #1420 with nanobind dependency removed (ctypes shim). 5-seed mean 1.07813 BPB (std 0.00046) beats PR #1394 (1.08563) by 0.01938 nats per token, beats PR #1420 (1.08014) by 0.00520 nats per token, beats own PR #1413 (1.08279) by 0.01205 nats per token.",
   "author": "dexhunter",
   "github_id": "dexhunter",
   "date": "2026-04-07",
   "seed_results": {
     "0": {"val_bpb": 1.07743, "val_loss": 2.78312, "steps": 4911},
     "42": {"val_bpb": 1.07808, "val_loss": 2.78479, "steps": 4913},
-    "1234": {"val_bpb": 1.07848, "val_loss": 2.78581, "steps": 4905}
+    "1234": {"val_bpb": 1.07848, "val_loss": 2.78581, "steps": 4905},
+    "1337": {"val_bpb": 1.07801, "val_loss": 2.78461, "steps": 4909},
+    "2025": {"val_bpb": 1.07862, "val_loss": 2.78620, "steps": 4908}
   },
   "lineage": [
     "PR #1394 (clarkkev) — sp8192 base",
Lines changed: 295 additions & 0 deletions
@@ -0,0 +1,295 @@
+
+*****************************************
+Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+*****************************************
+Hyperparameters:
+adam_eps: 1e-08
+adam_wd: 0.02
+asymmetric_logit: False
+beta1: 0.9
+beta2: 0.95
+cautious_wd: False
+compressor: brotli
+data_dir: /home/dex/parameter-golf-with-cc/data
+datasets_dir: /home/dex/parameter-golf-with-cc/data/datasets/fineweb10B_sp8192
+distributed: True
+ema_decay: 0.997
+embed_bits: 8
+embed_clip_sigmas: 20.0
+embed_lr: 0.6
+embed_wd: 0.085
+embedding_dim: 512
+enable_looping_at: 0.5
+etlb_clip: 3.0
+etlb_enabled: False
+etlb_lr: 0.05
+etlb_steps: 5
+eval_seq_len: 2048
+eval_stride: 64
+gptq_calibration_batches: 64
+gptq_reserve_seconds: 12.0
+grad_accum_steps: 1
+grad_clip_norm: 0.3
+head_lr: 0.008
+hessian_clip_lambda: 0.0
+int5_layers_str:
+is_main_process: True
+iterations: 20000
+ln_scale: True
+local_rank: 0
+logfile: logs/pr1394_par7_loop35_ngram_qk5_ttt_s1337.txt
+logit_softcap: 30.0
+loop_end: 5
+loop_start: 3
+matrix_bits: 6
+matrix_clip_sigmas: 12.85
+matrix_lr: 0.02
+max_wallclock_seconds: 600.0
+min_lr: 0.0
+mixed_quant: False
+mlp_mult: 4.0
+model_dim: 512
+model_path: final_model.pt
+muon_backend_steps: 5
+muon_beta2: 0.95
+muon_momentum: 0.99
+muon_momentum_warmup_start: 0.92
+muon_momentum_warmup_steps: 1500
+muon_row_normalize: True
+muon_wd: 0.085
+n_int6_layers: 50
+ngram_agree_bonus: 0.1
+ngram_base_beta: 2.0
+ngram_open_table_bits: 26
+ngram_order_stride: 2
+ngram_tilt_enabled: True
+ngram_within_beta: 0.92
+ngram_within_threshold: 0.25
+ngram_word_beta: 0.5
+ngram_word_threshold: 0.8
+num_heads: 8
+num_kv_heads: 4
+num_layers: 11
+num_loops: 2
+parallel_residual_start: 7
+qk_gain_init: 5.0
+quantized_model_path: final_model.int6.ptz
+rank: 0
+rope_base: 10000.0
+rope_dims: 16
+rope_train_seq_len: 2048
+run_id: pr1394_par7_loop35_ngram_qk5_ttt_s1337
+scalar_lr: 0.02
+seed: 1337
+skip_gates_enabled: True
+sliding_window_enabled: True
+tie_embeddings: True
+tied_embed_init_std: 0.005
+tied_embed_lr: 0.03
+tokenizer_path: /home/dex/parameter-golf-with-cc/data/tokenizers/fineweb_8192_bpe.model
+train_batch_tokens: 786432
+train_files: /home/dex/parameter-golf-with-cc/data/datasets/fineweb10B_sp8192/fineweb_train_*.bin
+train_log_every: 500
+train_seq_len: 2048
+ttt_batch_seqs: 32
+ttt_chunk_tokens: 32768
+ttt_enabled: True
+ttt_epochs: 3
+ttt_freeze_blocks: 0
+ttt_grad_clip: 1.0
+ttt_lr: 0.005
+ttt_momentum: 0.9
+use_polar_express: False
+val_batch_tokens: 524288
+val_files: /home/dex/parameter-golf-with-cc/data/datasets/fineweb10B_sp8192/fineweb_val_*.bin
+val_loss_every: 4000
+vocab_size: 8192
+warmdown_frac: 0.667
+warmup_steps: 20
+world_size: 8
+xsa_last_n: 11
+train_shards: 80
+val_tokens: 40540160
+model_params:35944536
+gptq:reserving 12s, effective=588000ms
+warmup_step: 1/20
+warmup_step: 2/20
+warmup_step: 3/20
+warmup_step: 4/20
+warmup_step: 5/20
+warmup_step: 6/20
+warmup_step: 10/20
+warmup_step: 20/20
+loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
+loop_warmup_step: 1/20
+loop_warmup_step: 2/20
+loop_warmup_step: 3/20
+loop_warmup_step: 4/20
+loop_warmup_step: 5/20
+loop_warmup_step: 6/20
+loop_warmup_step: 10/20
+loop_warmup_step: 20/20
+0/20000 val_loss: 9.0047 val_bpb: 3.4860
+1/20000 train_loss: 9.0060 train_time: 0.0m tok/s: 8323843
+2/20000 train_loss: 12.2777 train_time: 0.0m tok/s: 8185758
+3/20000 train_loss: 10.9105 train_time: 0.0m tok/s: 8095004
+4/20000 train_loss: 9.4144 train_time: 0.0m tok/s: 8052278
+5/20000 train_loss: 8.2870 train_time: 0.0m tok/s: 8026824
+500/20000 train_loss: 3.3745 train_time: 0.8m tok/s: 7790966
+1000/20000 train_loss: 3.2761 train_time: 1.7m tok/s: 7801926
+1500/20000 train_loss: 3.1781 train_time: 2.5m tok/s: 7810344
+2000/20000 train_loss: 3.0679 train_time: 3.4m tok/s: 7812312
+2500/20000 train_loss: 3.1518 train_time: 4.2m tok/s: 7812217
+layer_loop:enabled step:2921 frac:0.500 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
+3000/20000 train_loss: 2.9648 train_time: 5.1m tok/s: 7716745
+3500/20000 train_loss: 2.9715 train_time: 6.3m tok/s: 7248413
+4000/20000 train_loss: 2.8525 train_time: 7.6m tok/s: 6933232
+4000/20000 val_loss: 2.9088 val_bpb: 1.1261
+4500/20000 train_loss: 2.8783 train_time: 8.8m tok/s: 6706977
+4909/20000 val_loss: 2.8117 val_bpb: 1.0885
+stopping_early: wallclock_cap train_time: 588117ms step: 4909/20000
+peak memory allocated: 39046 MiB reserved: 39070 MiB
+ema:applying EMA weights
+pre-quantization post-ema val_loss:2.80969528 val_bpb:1.08772060 eval_time:6096ms
+Serialized model: 135431033 bytes
+Code size: 19060 bytes
+GPTQ:collecting Hessians from calibration data...
+GPTQ:collected 67 Hessians in 12.7s
+Quantized weights:
+gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight
+gptq (int8): tok_emb.weight
+passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights
+Serialized model quantized+brotli: 15968979 bytes
+Total submission size quantized+brotli: 15988039 bytes
+quantized val_loss:2.83930586 val_bpb:1.09918378 eval_time:8577ms
+quantized_sliding_window val_loss:2.79610141 val_bpb:1.08245799 eval_time:91730ms
+ngram_tilt:precompute n_tok=40540161 hints=9074676 (22.38%) elapsed=31.9s base_beta=2.0 within_beta=0.92 agree_bonus=0.1
+ttt_sliding:start chunks=1238 chunk_tokens=32768 total_windows=633409 stride=64 ttt_lr=0.005 ttt_epochs=3 freeze_blocks=0
+ttt_sliding:params unfrozen=35944536 frozen=0
+ttt_chunk [1/1238] bpb=1.113223 time=4.7s
+ttt_chunk [11/1238] bpb=1.069067 time=9.2s
+ttt_chunk [21/1238] bpb=1.105682 time=11.8s
+ttt_chunk [31/1238] bpb=1.100420 time=14.4s
+ttt_chunk [41/1238] bpb=1.093499 time=17.0s
+ttt_chunk [51/1238] bpb=1.086986 time=19.7s
+ttt_chunk [61/1238] bpb=1.078628 time=22.3s
+ttt_chunk [71/1238] bpb=1.085817 time=25.0s
+ttt_chunk [81/1238] bpb=1.079146 time=27.6s
+ttt_chunk [91/1238] bpb=1.075438 time=30.2s
+ttt_chunk [101/1238] bpb=1.075157 time=32.9s
+ttt_chunk [111/1238] bpb=1.073582 time=35.5s
+ttt_chunk [121/1238] bpb=1.076583 time=38.2s
+ttt_chunk [131/1238] bpb=1.080321 time=40.8s
+ttt_chunk [141/1238] bpb=1.080886 time=43.4s
+ttt_chunk [151/1238] bpb=1.080726 time=46.1s
+ttt_chunk [161/1238] bpb=1.081189 time=48.7s
+ttt_chunk [171/1238] bpb=1.080984 time=51.3s
+ttt_chunk [181/1238] bpb=1.079458 time=54.0s
+ttt_chunk [191/1238] bpb=1.079154 time=56.6s
+ttt_chunk [201/1238] bpb=1.076698 time=59.2s
+ttt_chunk [211/1238] bpb=1.081125 time=61.8s
+ttt_chunk [221/1238] bpb=1.081424 time=64.5s
+ttt_chunk [231/1238] bpb=1.083161 time=67.1s
+ttt_chunk [241/1238] bpb=1.081362 time=69.7s
+ttt_chunk [251/1238] bpb=1.081298 time=72.4s
+ttt_chunk [261/1238] bpb=1.082325 time=75.0s
+ttt_chunk [271/1238] bpb=1.082803 time=77.6s
+ttt_chunk [281/1238] bpb=1.082125 time=80.3s
+ttt_chunk [291/1238] bpb=1.083265 time=82.9s
+ttt_chunk [301/1238] bpb=1.083449 time=85.6s
+ttt_chunk [311/1238] bpb=1.082205 time=88.5s
+ttt_chunk [321/1238] bpb=1.082036 time=91.4s
+ttt_chunk [331/1238] bpb=1.082226 time=94.0s
+ttt_chunk [341/1238] bpb=1.081367 time=96.7s
+ttt_chunk [351/1238] bpb=1.082163 time=99.3s
+ttt_chunk [361/1238] bpb=1.081050 time=101.9s
+ttt_chunk [371/1238] bpb=1.079511 time=104.5s
+ttt_chunk [381/1238] bpb=1.079913 time=107.2s
+ttt_chunk [391/1238] bpb=1.079554 time=109.8s
+ttt_chunk [401/1238] bpb=1.079598 time=112.5s
+ttt_chunk [411/1238] bpb=1.080104 time=115.1s
+ttt_chunk [421/1238] bpb=1.079595 time=117.7s
+ttt_chunk [431/1238] bpb=1.079783 time=120.4s
+ttt_chunk [441/1238] bpb=1.079820 time=123.0s
+ttt_chunk [451/1238] bpb=1.081019 time=125.7s
+ttt_chunk [461/1238] bpb=1.079249 time=128.3s
+ttt_chunk [471/1238] bpb=1.079234 time=130.9s
+ttt_chunk [481/1238] bpb=1.079400 time=133.6s
+ttt_chunk [491/1238] bpb=1.079845 time=136.5s
+ttt_chunk [501/1238] bpb=1.079463 time=139.2s
+ttt_chunk [511/1238] bpb=1.079152 time=141.8s
+ttt_chunk [521/1238] bpb=1.078696 time=144.5s
+ttt_chunk [531/1238] bpb=1.078634 time=147.7s
+ttt_chunk [541/1238] bpb=1.078702 time=150.4s
+ttt_chunk [551/1238] bpb=1.078241 time=153.1s
+ttt_chunk [561/1238] bpb=1.077545 time=155.7s
+ttt_chunk [571/1238] bpb=1.076982 time=158.4s
+ttt_chunk [581/1238] bpb=1.077342 time=161.1s
+ttt_chunk [591/1238] bpb=1.077551 time=163.7s
+ttt_chunk [601/1238] bpb=1.077510 time=166.4s
+ttt_chunk [611/1238] bpb=1.078097 time=169.1s
+ttt_chunk [621/1238] bpb=1.078935 time=171.8s
+ttt_chunk [631/1238] bpb=1.078996 time=174.4s
+ttt_chunk [641/1238] bpb=1.079425 time=177.1s
+ttt_chunk [651/1238] bpb=1.079751 time=179.8s
+ttt_chunk [661/1238] bpb=1.079081 time=182.4s
+ttt_chunk [671/1238] bpb=1.078863 time=185.1s
+ttt_chunk [681/1238] bpb=1.080181 time=187.8s
+ttt_chunk [691/1238] bpb=1.080361 time=190.4s
+ttt_chunk [701/1238] bpb=1.080161 time=193.1s
+ttt_chunk [711/1238] bpb=1.080845 time=195.8s
+ttt_chunk [721/1238] bpb=1.081152 time=198.5s
+ttt_chunk [731/1238] bpb=1.080504 time=201.1s
+ttt_chunk [741/1238] bpb=1.080201 time=203.8s
+ttt_chunk [751/1238] bpb=1.079266 time=206.5s
+ttt_chunk [761/1238] bpb=1.078662 time=209.1s
+ttt_chunk [771/1238] bpb=1.077649 time=211.8s
+ttt_chunk [781/1238] bpb=1.077644 time=214.5s
+ttt_chunk [791/1238] bpb=1.078000 time=217.1s
+ttt_chunk [801/1238] bpb=1.078270 time=219.8s
+ttt_chunk [811/1238] bpb=1.077775 time=222.5s
+ttt_chunk [821/1238] bpb=1.076585 time=225.1s
+ttt_chunk [831/1238] bpb=1.076276 time=227.8s
+ttt_chunk [841/1238] bpb=1.075817 time=230.5s
+ttt_chunk [851/1238] bpb=1.075519 time=233.1s
+ttt_chunk [861/1238] bpb=1.075169 time=235.8s
+ttt_chunk [871/1238] bpb=1.075050 time=238.4s
+ttt_chunk [881/1238] bpb=1.074616 time=241.1s
+ttt_chunk [891/1238] bpb=1.074084 time=243.7s
+ttt_chunk [901/1238] bpb=1.074450 time=246.4s
+ttt_chunk [911/1238] bpb=1.074148 time=249.1s
+ttt_chunk [921/1238] bpb=1.074436 time=251.7s
+ttt_chunk [931/1238] bpb=1.075113 time=254.4s
+ttt_chunk [941/1238] bpb=1.075496 time=257.0s
+ttt_chunk [951/1238] bpb=1.075427 time=259.7s
+ttt_chunk [961/1238] bpb=1.076268 time=262.4s
+ttt_chunk [971/1238] bpb=1.076655 time=265.0s
+ttt_chunk [981/1238] bpb=1.077032 time=267.7s
+ttt_chunk [991/1238] bpb=1.076823 time=270.3s
+ttt_chunk [1001/1238] bpb=1.076884 time=273.0s
+ttt_chunk [1011/1238] bpb=1.077229 time=275.7s
+ttt_chunk [1021/1238] bpb=1.077949 time=278.3s
+ttt_chunk [1031/1238] bpb=1.078436 time=281.0s
+ttt_chunk [1041/1238] bpb=1.078896 time=283.7s
+ttt_chunk [1051/1238] bpb=1.078823 time=286.3s
+ttt_chunk [1061/1238] bpb=1.078820 time=289.0s
+ttt_chunk [1071/1238] bpb=1.078974 time=291.6s
+ttt_chunk [1081/1238] bpb=1.078853 time=294.3s
+ttt_chunk [1091/1238] bpb=1.079037 time=297.0s
+ttt_chunk [1101/1238] bpb=1.079561 time=299.6s
+ttt_chunk [1111/1238] bpb=1.079850 time=302.3s
+ttt_chunk [1121/1238] bpb=1.080025 time=305.0s
+ttt_chunk [1131/1238] bpb=1.079680 time=307.6s
+ttt_chunk [1141/1238] bpb=1.079329 time=310.3s
+ttt_chunk [1151/1238] bpb=1.079369 time=313.0s
+ttt_chunk [1161/1238] bpb=1.079503 time=315.6s
+ttt_chunk [1171/1238] bpb=1.079282 time=318.3s
+ttt_chunk [1181/1238] bpb=1.078825 time=321.0s
+ttt_chunk [1191/1238] bpb=1.078962 time=323.6s
+ttt_chunk [1201/1238] bpb=1.079014 time=326.3s
+ttt_chunk [1211/1238] bpb=1.078705 time=329.0s
+ttt_chunk [1221/1238] bpb=1.078246 time=331.6s
+ttt_chunk [1231/1238] bpb=1.077894 time=334.3s
+ttt_chunk [1238/1238] bpb=1.077892 time=338.1s
+ttt_sliding:done val_loss=2.784614 val_bpb=1.078011 elapsed=338.3s
+legal_ttt_exact val_loss:2.78461379 val_bpb:1.07801077 eval_time:338446ms
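The paired val_loss/val_bpb figures in this log should be internally consistent, assuming val_bpb = val_loss / (ln 2 × mean bytes per token) with a fixed validation set. A quick consistency sketch over the three full-precision evaluation lines of this s1337 run (the bytes-per-token value is implied, not printed by the log):

```python
import math

# (val_loss in nats/token, val_bpb) pairs printed by this s1337 run
pairs = [
    (2.80969528, 1.08772060),  # pre-quantization post-EMA
    (2.83930586, 1.09918378),  # quantized
    (2.78461379, 1.07801077),  # legal_ttt_exact (post-TTT, n-gram tilted)
]

# If the val set is fixed, loss/bpb is the same constant for every eval.
ratios = [loss / bpb for loss, bpb in pairs]

# That constant is ln(2) * mean bytes per token; invert to get the implied
# bytes-per-token of the validation data (~3.73 for this tokenizer).
bytes_per_token = ratios[0] / math.log(2)
```

All three ratios agree to about six decimal places, which is a cheap sanity check that the three evaluations really ran over the same validation tokens.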
