136 commits
66a6839
exp: depth recurrence (7 physical layers, schedule [0,1,2,3,4,3,4,5,6])
leon2k2k2k Apr 17, 2026
ea1898a
exp: update depth recurrence to Option B (9L, same param budget)
leon2k2k2k Apr 17, 2026
a7d5459
add experiment scripts for sweep session
leon2k2k2k Apr 17, 2026
0f74a42
exp: add LeakyReLU, parallel residuals, staged recurrence
leon2k2k2k Apr 17, 2026
6e2da72
fix: move staged recurrence outside forward() for torch.compile
leon2k2k2k Apr 17, 2026
843834d
exp: add EMA weights + partial RoPE (16/64 dims)
leon2k2k2k Apr 17, 2026
7d2063b
exp: GPTQ-lite adaptive quantization (per-row MSE-optimal clip)
leon2k2k2k Apr 17, 2026
5b3001f
exp: disable EMA by default (broken with INT8 quantization)
leon2k2k2k Apr 17, 2026
f776719
fix: manual GQA for PyTorch compat + 2xH100 launch scripts
leon2k2k2k Apr 18, 2026
454d742
beating-sota: decoded SOTA code + checkpoint saving + 8xH100 scripts
leon2k2k2k Apr 18, 2026
d529fe8
add BigramHash, hotstart.py, infrastructure setup verified
leon2k2k2k Apr 18, 2026
625a41d
research: scaffold research/execution workflow
leon2k2k2k Apr 18, 2026
8191cc8
fix: skills need SKILL.md subdir format, not flat .md
leon2k2k2k Apr 18, 2026
7b1c6d8
docs: worktree workflow for exp/<slug> branches
leon2k2k2k Apr 18, 2026
fb353d2
docs: clarify who decides on smoke test
leon2k2k2k Apr 18, 2026
01e6fcf
docs: hardware ladder is 2×H100 mini → 8×H100 official (drop 1×H100)
leon2k2k2k Apr 18, 2026
50e880b
spec 000: fill in pinned commit + expand checkpoint plan to phase-bou…
leon2k2k2k Apr 18, 2026
39094a5
spec 000: add QK_GAIN_INIT=5.25 and TTT_ENABLED=1 to config diff
leon2k2k2k Apr 18, 2026
9174c4d
spec 000: fix cost estimate to ~20min/\$6; add architecture.md
leon2k2k2k Apr 18, 2026
718fbdb
rename architecture.md → current_architecture.md
leon2k2k2k Apr 18, 2026
ca657cc
spec 000: complete first replication run + lessons learned
leon2k2k2k Apr 18, 2026
79ebce0
spec 001: freeze Hessian-SDClip λ screen
leon2k2k2k Apr 18, 2026
de0aec6
diary: spec 000 throughput deficit + research phase + spec 001 decisions
leon2k2k2k Apr 18, 2026
c3f9a6d
spec 001: hessian-sdclip sweep complete, no signal
leon2k2k2k Apr 18, 2026
001f6be
spec 002: freeze SWA + EMA blend screen
leon2k2k2k Apr 18, 2026
b44c34e
spec 001: evaluation + shelve Hessian-SDClip
leon2k2k2k Apr 18, 2026
77085f6
spec 003: freeze BigramHash paired from-scratch screen
leon2k2k2k Apr 18, 2026
a44437d
spec 003: switch to single-run screen matching Exp 24 config
leon2k2k2k Apr 18, 2026
e3573d1
spec 002: swa+ema blend screen complete, no signal
leon2k2k2k Apr 18, 2026
3825019
spec 002: evaluation + shelve SWA+EMA
leon2k2k2k Apr 18, 2026
abbd3a0
ideas: add swiglu-activation and ar-selfgen-gptq-calib candidates
leon2k2k2k Apr 18, 2026
11e1f22
spec 003: bigram-hash screen complete, no signal
leon2k2k2k Apr 19, 2026
03f2588
diary: session wrap — 4 specs, 3 kills, $21.25, fleet empty
leon2k2k2k Apr 19, 2026
feaf45e
spec 003 eval: BigramHash killed (+0.00118 vs Exp 24), signal gate mi…
leon2k2k2k Apr 19, 2026
baa4115
spec 004: freeze QK-gain extension triage screen
leon2k2k2k Apr 19, 2026
3fb8524
spec 004: QK-gain extension killed — tied within noise
leon2k2k2k Apr 19, 2026
503a116
diary: spec 004 null + two methodology bugs + strategic reckoning
leon2k2k2k Apr 19, 2026
a095351
spec 006: dense-checkpoint full-curve run for training-dynamics analysis
leon2k2k2k Apr 19, 2026
74f9df2
spec 006: add per-step loss logging + per-layer grad norms
leon2k2k2k Apr 19, 2026
b85dbe5
spec 006: rescope to full-curve dynamics scan
leon2k2k2k Apr 19, 2026
f2401ad
research/dynamics: consolidate knowledge + analysis skeletons
leon2k2k2k Apr 19, 2026
6af013f
spec 006 post-run findings: flat zone closed, recurrence bump real
leon2k2k2k Apr 19, 2026
87bf8ed
spec 007: delayed-warmdown hotstart screen
leon2k2k2k Apr 19, 2026
154c9b8
spec 008: import PR #1736 submission directory
leon2k2k2k Apr 20, 2026
416b0e4
spec 008 + baseline migration to #1736
leon2k2k2k Apr 20, 2026
9306f89
spec 008: single seed, pre-GPTQ checkpoint for quant hotstart
leon2k2k2k Apr 20, 2026
d95dd08
spec 009: SpinQuant V1 hotstart on top of #1736
leon2k2k2k Apr 20, 2026
07e43e7
specs 008/009: fill in code-level specifics from reading train_gpt.py
leon2k2k2k Apr 20, 2026
c8c5749
spec 009: collapse into single-session ABC sweep on one pod
leon2k2k2k Apr 20, 2026
b948031
spec 009: run all three variants unconditionally
leon2k2k2k Apr 20, 2026
a552fba
spec 009: add baseline mode, fold spec 008's missed gate number in
leon2k2k2k Apr 20, 2026
1572115
spec 009: implement spinquant_hotstart.py (baseline + R_a-only modes)
leon2k2k2k Apr 20, 2026
cd31f01
spec 009: align with implemented scope (2 modes, not 4)
leon2k2k2k Apr 20, 2026
611811d
spec 010 + SpinQuant notes addendum on #1695's online-rotation approach
leon2k2k2k Apr 20, 2026
17c8521
swap spec numbering: 010 = port_1695 online rotation, 011 = tapered WD
leon2k2k2k Apr 20, 2026
6456188
spec 009: rebank unbanked state_dict before load
leon2k2k2k Apr 20, 2026
9b87109
diary 2026-04-20: baseline migration to #1736 + SpinQuant design sprint
leon2k2k2k Apr 20, 2026
b47a252
spec 010: port #1695's online Hadamard rotation into train_gpt.py
leon2k2k2k Apr 20, 2026
ff52a06
spec 010b: site-selective SpinQuant (SPINQUANT_SITES env var)
leon2k2k2k Apr 20, 2026
92cd417
diary 2026-04-20 pt.2: SpinQuant results + regime-dependence finding
leon2k2k2k Apr 20, 2026
8815c4d
idea: rotation regime-dependence (long-context help, short-context hurt)
leon2k2k2k Apr 20, 2026
16d39a7
diary 2026-04-20 pt.3: SpinQuant exhausted, methodology correction
leon2k2k2k Apr 20, 2026
5123db7
diary pt.4: PR scan for novel levers to port onto #1736
leon2k2k2k Apr 20, 2026
e94286b
spec 012: training-bundle + idea files (GradPower, per-layer QK)
leon2k2k2k Apr 20, 2026
1d6f275
spec 012: pin commit 8d54854
leon2k2k2k Apr 20, 2026
9610e5a
spec 012: drop QK from first pass, add 2H smoke + early-stop protocol
leon2k2k2k Apr 20, 2026
7485280
spec 012: soften early-stop to joint executor+user monitoring
leon2k2k2k Apr 20, 2026
9fb038f
renumber training-bundle 012 → 011; shelve old 011-tapered-wd
leon2k2k2k Apr 20, 2026
2b4d12e
ideas: BPB-weighted CE loss (#1519) + BigramHash embed (#1716)
leon2k2k2k Apr 20, 2026
b9e0ab2
spec 011: bump pinned commit to 893cefd (rotary-fix)
leon2k2k2k Apr 20, 2026
d3d177d
spec 013: BigramHash auxiliary embedding spec
leon2k2k2k Apr 20, 2026
a33d31d
spec 013: add pre-registered expectations table for train_loss curve
leon2k2k2k Apr 20, 2026
9c2aaf8
spec 011 evaluation: GradPower null, WD taper never engaged
leon2k2k2k Apr 20, 2026
ba07a14
spec 013 evaluation: BigramHash shelved (single-seed +0.0025 regression)
leon2k2k2k Apr 20, 2026
737faf3
spec 014: BPB-weighted CE loss spec
leon2k2k2k Apr 20, 2026
2209501
spec 014: add Thoughts section (rationale + outcome matrix + meta-story)
leon2k2k2k Apr 20, 2026
6c55441
research notes: recurrence + parallel residuals + XSA literature
leon2k2k2k Apr 20, 2026
ca370ff
research notes: recurrence activation schedule literature
leon2k2k2k Apr 20, 2026
6f91e47
mark recurrence-parallel-literature.md as active research thread
leon2k2k2k Apr 20, 2026
87ac1c2
major update: incorporate PG-internal recurrence history, kill 3 of 5…
leon2k2k2k Apr 20, 2026
18b3775
spec 014: formally shelved (+0.0619 regression, 3rd null in push)
leon2k2k2k Apr 20, 2026
f56f98a
spec 015: Recur-Alpha learnable per-pass blending
leon2k2k2k Apr 20, 2026
e8f0450
research: spec 015 schedule trace plot
leon2k2k2k Apr 20, 2026
1276cf5
spec 015: fix ambiguous stop-early criterion that halted first smoke
leon2k2k2k Apr 20, 2026
ecdbb83
EXECUTION.md: always set TORCHINDUCTOR_CACHE_DIR for compile cache reuse
leon2k2k2k Apr 20, 2026
ba84e91
EXECUTION.md: standardize progress-update table format
leon2k2k2k Apr 20, 2026
304c552
train_gpt/EXECUTION: save checkpoint before code-size; preflight pymi…
leon2k2k2k Apr 20, 2026
ac4708f
spec 016: freeze commit hash 4dd2d63 for recur-alpha-ones
leon2k2k2k Apr 20, 2026
9ca2bf8
diary + ideas: recur-alpha findings + beating-1736 analysis
leon2k2k2k Apr 21, 2026
c10e747
spec 016b: recur-alpha throughput diagnostic
leon2k2k2k Apr 21, 2026
9cf57ef
spec 016b: add warmup run, cut iterations 300 -> 150
leon2k2k2k Apr 21, 2026
b901a5e
spec 017: recur-alpha matched-throughput full-pipeline run
leon2k2k2k Apr 21, 2026
eda03cb
spec 017: reframe for JP (NA unavailable), tighten goal to "just run …
leon2k2k2k Apr 21, 2026
457a0a8
spec 018: pin commit 97d9854 (torch.lerp blend optimization)
leon2k2k2k Apr 21, 2026
d538512
017 retroactive note: TTT forward path ignores recur_alpha
leon2k2k2k Apr 21, 2026
5fdd383
specs 018 + 018b: pin commit hashes with TTT fix included
leon2k2k2k Apr 21, 2026
6213ec3
spec 018c: hardcoded α throughput test (Python float constants)
leon2k2k2k Apr 21, 2026
03e8bba
evals 015/016/016b/018/018b/018c: throughput diagnostic + lerp/bake-i…
leon2k2k2k Apr 21, 2026
e8681bc
spec 019: hardcoded α full-pipeline on 8xH100 (Path A)
leon2k2k2k Apr 21, 2026
acb1f43
spec 019: update commit hash to 3c3a134 (fix pass-2 L5 alpha constant)
leon2k2k2k Apr 21, 2026
a0f3b34
spec 019: full pipeline result — post-TTT 1.06744, missed by pod lottery
leon2k2k2k Apr 21, 2026
727bd06
008 notes: fill in post-TTT 1.06728 measured via spec 009 baseline run
leon2k2k2k Apr 21, 2026
3719151
eval 019b: algebraic lerp null result — post-TTT 1.06628, misses #173…
leon2k2k2k Apr 21, 2026
605db46
diary + specs + evals: 017-to-025b arc, spec 026 ready
leon2k2k2k Apr 22, 2026
7f161d5
eval 025b/025c: per-pass differentiation null — 025b promoted to spec…
leon2k2k2k Apr 22, 2026
7d74dd5
spec 027 + frontier-scan 2026-04-22-2: LoRA warm-start-A + depth curr…
leon2k2k2k Apr 22, 2026
7cb02ce
spec 026: add LoRA warm-start-A (d70888f), TTT_LORA_ALPHA=144, TTT_WE…
leon2k2k2k Apr 22, 2026
c8c6c90
idea: match-1769-baseline — run clip=12+TTT on 008 model to find true…
leon2k2k2k Apr 22, 2026
c777db7
spec 028: post-train clip=12 + TTT diagnostic on 008 model
leon2k2k2k Apr 22, 2026
413aeec
remove spec 028 — redundant, spec 009 already ran clip=12 (1.0673 see…
leon2k2k2k Apr 22, 2026
c429953
spec 026: seed 314 + fix sanity greps + eval seed_42 + experiments row
leon2k2k2k Apr 22, 2026
0509760
memory: dexhunter seed strategy + always-push feedback + gap diagnosis
leon2k2k2k Apr 22, 2026
c84e615
spec 029: full stack 025b + LoRA warm-start-A + depth curriculum + se…
leon2k2k2k Apr 22, 2026
f698ef6
spec 029: update commit to c3a99b3 (curriculum alpha_info fix + pre-w…
leon2k2k2k Apr 22, 2026
66aac0e
spec 029: shorten mini to 400s — depth upgrade fires at 80s, no need …
leon2k2k2k Apr 22, 2026
33a8df5
spec 029: enable full TTT pipeline on 4xH screen — post-TTT gate repl…
leon2k2k2k Apr 22, 2026
67cf013
spec 028: TTT-only on 026 screen seed 314 float with new TTT settings…
leon2k2k2k Apr 22, 2026
aad7765
spec 028: replace with 026 seed_42 old-vs-new TTT comparison
leon2k2k2k Apr 22, 2026
a894152
delete spec 027 (lora-warmstart-depth-curriculum): superseded by 029 …
leon2k2k2k Apr 22, 2026
9bf137d
spec 028: PHASED_TTT_ENABLED=1 → 3 (0=slow TTT not disabled)
leon2k2k2k Apr 22, 2026
a655046
spec 030: 025b seed 314 + correct PHASED_TTT=3 + warm-start-A
leon2k2k2k Apr 22, 2026
24c314b
sdlfkj
leon2k2k2k Apr 22, 2026
1cac69b
Add direct-carry calibration variants for spec 031
leon2k2k2k Apr 22, 2026
7efcff5
Add draft 032 alpha-beta recurrence curriculum mode
leon2k2k2k Apr 22, 2026
fb868f2
Record draft implementation commit for spec 032
leon2k2k2k Apr 22, 2026
bff18a4
Freeze spec 032 launch contract
leon2k2k2k Apr 22, 2026
3e4c6e0
Pin commit hash in spec 032
leon2k2k2k Apr 22, 2026
303a1c3
Fix spec 032 launch contract
leon2k2k2k Apr 22, 2026
ec2a507
Log carry snapshots to artifact file
leon2k2k2k Apr 22, 2026
7f805b9
Update spec 032 for carry snapshot artifact
leon2k2k2k Apr 22, 2026
97528b4
Update research specs and evaluations
leon2k2k2k Apr 23, 2026
0fd61cb
Add 034c/034d LR schedule specs
leon2k2k2k Apr 23, 2026
d7ae414
Add 034c execution artifacts and guardrails
leon2k2k2k Apr 23, 2026
26da5c6
Spec 038 smear gate plus LQER asym promotion
leon2k2k2k Apr 24, 2026
a471c88
Pin spec 038 branch commit
leon2k2k2k Apr 24, 2026
795c96e
Record: #1787 + Sparse Gate + Updated Frozen Carry — val_bpb 1.06287
leon2k2k2k Apr 24, 2026
1 change: 1 addition & 0 deletions .claude/scheduled_tasks.lock
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
{"sessionId":"845a7b07-cc9e-4b32-9356-527eaa592da2","pid":3874006,"acquiredAt":1776804744293}
26 changes: 26 additions & 0 deletions .claude/scripts/discord_post.sh
@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# Post a message to Discord. Text from $1 or stdin.
# Usage:
# discord_post.sh "one-line message"
# echo -e "multi\nline" | discord_post.sh
set -euo pipefail

SECRETS="${DISCORD_SECRETS:-/home/claude-user/.claude/skills/secrets.env}"
set -a; source "$SECRETS"; set +a

if [ $# -ge 1 ]; then
  MSG="$1"
else
  MSG="$(cat)"
fi

# Discord hard-caps message content at 2000 chars.
if [ "${#MSG}" -gt 1990 ]; then
  MSG="${MSG:0:1980}…(truncated)"
fi

curl -sS -X POST \
  -H "Authorization: Bot $DISCORD_BOT_TOKEN" \
  -H "Content-Type: application/json" \
  --data-raw "$(jq -n --arg c "$MSG" '{content:$c}')" \
  "https://discord.com/api/v10/channels/$DISCORD_CHANNEL/messages" > /dev/null
20 changes: 20 additions & 0 deletions .claude/scripts/discord_post_table.sh
@@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Post a monospace code-block (table / multi-line) to Discord.
# Usage:
# discord_post_table.sh < path/to/file
# cat table.txt | discord_post_table.sh
# discord_post_table.sh "line1\nline2\nline3"
set -euo pipefail

HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if [ $# -ge 1 ]; then
  BODY="$(printf '%b' "$1")"
else
  BODY="$(cat)"
fi

# Wrap in triple-backtick code block for fixed-width rendering.
MSG="$(printf '```\n%s\n```' "$BODY")"

printf '%s' "$MSG" | "$HERE/discord_post.sh"
37 changes: 37 additions & 0 deletions .claude/skills/execution/SKILL.md
@@ -0,0 +1,37 @@
---
name: execution
description: Activate execution role for the Parameter Golf repo. Invoke at the start of an execution session (pod is live or about to be). Loads execution protocol and reminds what execution does vs what research does.
---

# Execution role

You are in **execution mode** for the Parameter Golf record-track push.

## What execution does
- Reads `CLAUDE.md` and `EXECUTION.md` at the top of this repo (authoritative).
- Is handed one spec number at a time. Reads **only** `research/specs/NNN-slug.md`, plus the two top-level docs. Does not browse other specs.
- Interviews the spec with the user before launching (see `EXECUTION.md` §"Spec interview protocol"). Surfaces ambiguities, resolves open questions.
- Runs preflight checklist before every launch.
- Follows the hardware ladder: 2×H100 mini → 8×H100 official, per the spec.
- Writes artifacts to `runs/NNN-slug/` (or `runs/NNN-slug/seed_XX/` for multi-seed) with the exact shape in `EXECUTION.md`.
- Stops the pod immediately after eval.
- Hands back a one-paragraph summary to the user.

## What execution does NOT do
- Never modifies training logic code. Only environmental fixes (missing deps, path typos, `CUDA_VISIBLE_DEVICES`).
- Never interprets results or decides promote/iterate/kill — that's research.
- Never writes rows to `experiments.md`.
- Never writes `research/evaluations/`.
- Never launches without a completed spec interview + passed preflight.

## First actions on session start
1. Read `CLAUDE.md` and `EXECUTION.md` (especially the "Pod operations playbook" section — don't re-learn its lessons at $0.40/min).
2. Ask the user which spec number to run.
3. Open that spec and begin the interview.

## Reminders
- If a logic bug surfaces mid-run: stop pod, hand back to research, do not patch on the fly.
- Stop pods immediately after every run (`runpodctl pod stop <id>`). If the pod may be reused the same day, stopping is enough; at end of day, `runpodctl pod delete <id>` to fully terminate.
- **Rsync artifacts BEFORE stopping the pod**, not after. Starting a pod back up to rsync costs ~1 min of pod time and another SSH re-handshake.
- `final.json` is the deliverable — if it's not written, the run is lost.
- Checkpoints live on the NA-1 volume, not in git. Git gets `checkpoints.md` pointer files only.
85 changes: 85 additions & 0 deletions .claude/skills/frontier-scan/HEURISTICS.md
@@ -0,0 +1,85 @@
# Frontier-scan heuristics

Used by the frontier-scan skill to classify each PR. Update this file when new
rulings land or new patterns emerge.

## Our baseline

**PR #1736** (dexhunter) — val_bpb **1.06549** (3-seed mean).
Stack: SP8192 + CaseOps tokenizer + SmearGate + AttnOutGate + QuantGate +
Loop45 depth-recurrence + phased TTT (multi-phase global SGD + doc-indep LoRA).

## Legitimacy categories

### `clean`
No open disputes. Training and eval procedures follow competition rules.

### `tokenizer-disputed`
Uses a non-standard tokenizer. Apply Issue #1604 lossless-roundtrip test:
- **Likely legal:** lossless CaseOps — `decode(encode(s)) == s` holds; byte
sidecar recovers original bytes; exact byte count recoverable from token IDs.
- **Likely illegal:** lossy casefold (`.lower()`) — destroys case information;
byte count cannot be reproduced from token IDs alone.
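For concreteness, the roundtrip test can be sketched as follows (a minimal illustration; `lossy_encode`/`lossy_decode` are hypothetical stand-ins modeling a casefold tokenizer, not any PR's actual API):

```python
# Sketch of the Issue #1604 lossless-roundtrip test.
# These stand-ins model a lossy casefold tokenizer, not a real PR's code.

def lossy_encode(s: str) -> list[str]:
    return s.lower().split()        # .lower() destroys case information

def lossy_decode(tokens: list[str]) -> str:
    return " ".join(tokens)

def roundtrip_ok(s: str, encode, decode) -> bool:
    # The Issue #1604 test: decode(encode(s)) must equal s exactly.
    return decode(encode(s)) == s

sample = "The Quick Brown Fox"
print(roundtrip_ok(sample, lossy_encode, lossy_decode))  # False: likely illegal
```

A lossless CaseOps scheme would pass this check and, in addition, allow the exact byte count to be recovered from token IDs alone.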

### `prequant-ttt-disputed`
Issue #1017 Condition 3: "you may not test-time train on validation data before
the artifact is frozen." A pre-quant TTT pass runs AdamW/SGD updates on val
tokens, then freezes weights — this is the disallowed pattern.
- Physics ceiling check: bigbag's empirical corpus-level TTT ceiling is
**~0.0003 bpb**. Any submission claiming >0.005 bpb from TTT-related levers
deserves scrutiny. A claim of ~0.038 bpb (as in #1735) is ~100× over the
ceiling — assign 85-90% probability of illegal ruling.
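The ceiling check is plain arithmetic; a sketch using the numbers above:

```python
# Physics-ceiling check for TTT-related bpb claims.
TTT_CEILING = 0.0003   # bigbag's empirical corpus-level TTT ceiling (bpb)
SCRUTINY    = 0.005    # claims above this deserve scrutiny

claimed = 0.038        # the #1735-style claim discussed above
print(claimed > SCRUTINY)            # True: flag it
print(round(claimed / TTT_CEILING))  # 127, i.e. roughly 100x over the ceiling
```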

### `byte-bug-suspect`
GatedDeltaNet / FLA cluster (Issue #1719): `build_sentencepiece_luts`
double-counts the byte denominator by ~17.46%, so corrected bpb is ~18% higher
than claimed. PRs: #1698, #1711, #1712, #1734 (three self-closed). Flag any PR
using flash-linear-attention (`from fla.` import) for this check.
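The correction factor follows directly from the over-counted denominator (a sketch of the arithmetic only; the 1.05 input is illustrative, not a specific PR's claim):

```python
# Issue #1719: the byte denominator is inflated by ~17.46%, so
# claimed_bpb = bits / (true_bytes * 1.1746). Undo the inflation:
OVERCOUNT = 1.1746

def corrected_bpb(claimed_bpb: float) -> float:
    return claimed_bpb * OVERCOUNT

print(round(corrected_bpb(1.05), 4))  # 1.2333, ~17.5% higher than claimed
```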

### `broken`
Artifact oversized, training crash, or claim definitively refuted.

### `other`
Non-record track, WIP, negative-result, or reproduction only. Skip unless it
contains an extractable novel lever.

## Absorption rule

Training-time levers (architecture changes, optimizer changes) are at risk of
being absorbed by #1736's phased TTT — TTT adapts the model at eval time and
can compensate for a missing training-time improvement. Quant-side levers have
been empirically shown to be absorbed (specs 009/010/010b). Eval-time levers
(Tap-In, score-first TTT, hash embeddings trained only via TTT) run downstream
of TTT and **cannot be absorbed**.

## Already-in-#1736 (exclude from "novel lever" classification)

SP8192 tokenizer, CaseOps, SmearGate, AttnOutGate, QuantGate, Loop45, phased
TTT (multi-phase SGD + LoRA), VarLen attention + fused MLP (TMA), Muon
matrix_lr=0.026, per-layer adaptive GPTQ clip (MLP/ATTN/MATRIX sigmas), int8
embeddings, logit_softcap, XSA (`F.normalize(v, dim=-1)`), RMSNorm Q/K,
`leaky_relu(x,0.5).square()` MLP activation, GPTQ mixed int6/int8.

## Already-specced (don't re-flag as new)

| Spec | Lever |
|------|-------|
| 011 | Tapered weight decay |
| 012 | GradPower Muon p=0.9 |
| 013 | xIELU + per-layer QK gain |
| 014 | Tap-In min_match=1 |

## Banned mechanisms (always illegal)

- Lossy casefold (Issue #1604)
- Pre-quant TTT (Issue #1017 Condition 3)
- N-gram eval cache
- Trinity/SLOT
- GatedDeltaNet / FLA byte-bug cluster — treat any `from fla.` import as a
red flag requiring byte-accounting audit before accepting claimed bpb.

## Actionable-delta threshold

Flag a PR as **actionable** if its clean claimed bpb is below our baseline
**1.06549**. Flag as **watch** if it falls between the baseline and 1.075.
Above 1.075 is background noise unless it contains an isolated novel lever.
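The triage above can be sketched as a small classifier (thresholds from this section; the function name is ours, not part of the scan tooling):

```python
BASELINE = 1.06549   # PR #1736, 3-seed mean val_bpb

def classify(clean_bpb: float) -> str:
    """Triage a PR's clean claimed bpb per the actionable-delta threshold."""
    if clean_bpb < BASELINE:
        return "actionable"
    if clean_bpb <= 1.075:
        return "watch"
    return "noise"   # unless it contains an isolated novel lever

print(classify(1.060))   # actionable
print(classify(1.070))   # watch
print(classify(1.080))   # noise
```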