Add mirrored-recurrence MLX non-record submission#84

Open
cschubiner wants to merge 1 commit into openai:main from cschubiner:codex/parameter-golf-mlx-local-submission

Conversation

@cschubiner

Adds a non-record mirrored-recurrence submission under records/track_non_record_16mb/2026-03-19_MirrorRecurrence_MLX_M5Max_sp1024.

Summary:

  • Apple Silicon MLX local run on M5 Max
  • 18 logical layers with 9 unique transformer blocks via mirrored recurrence
  • Intended as an interesting non-record experiment under the 16MB cap
  • Includes submission materials in the record folder

This PR only adds the new records folder for the submission.
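The mirrored-recurrence idea described above (18 logical layers sharing 9 unique transformer blocks) can be sketched roughly as follows. This is an illustrative reconstruction, not code from the submission; `mirrored_schedule` and its arguments are hypothetical names:

```python
# Illustrative sketch of mirrored recurrence: `num_layers` logical layers
# are served by only `unique_layers` parameter blocks, with the second
# half of the stack reusing the first half's blocks in reverse order.

def mirrored_schedule(num_layers: int, unique_layers: int) -> list[int]:
    half = num_layers // 2
    encoder = [i % unique_layers for i in range(half)]   # e.g. 0..8
    decoder = list(reversed(encoder))                    # e.g. 8..0
    return encoder + decoder

# 18 logical layers over 9 unique blocks: indices 0..8 then 8..0
print(mirrored_schedule(18, 9))
```

Each index in the returned schedule names which shared block runs at that depth, which is how the parameter count stays within the 16 MB cap while the effective depth doubles.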


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6853b49028



Configuration:
- Hardware: Apple `M5 Max`, MLX `0.31.1`
- Data: published `fineweb10B_sp1024` export, full validation split, `1/195` training shards


P1 Badge Specify the exact single training shard used

This README says the run used the published fineweb10B_sp1024 export with 1/195 train shards, but it never identifies which shard was kept or how DATA_PATH was prepared. The checked-in train.log shows the actual run only saw one shard (train_shards:1/195), so rerunning the documented command against a normal fineweb10B_sp1024 export will train on all 195 shards and produce a materially different experiment. As written, the submission is not reproducible.
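One way to address this would be a small helper that materializes a data directory containing exactly the shard that was trained on. A hedged sketch only: the directory layout, shard filename, and function name are assumptions, not details from the submission:

```python
# Hypothetical sketch of pinning the run to one named training shard so a
# rerun reproduces the same 1/195 selection instead of training on all
# 195 shards of a full fineweb10B_sp1024 export.
import shutil
from pathlib import Path


def prepare_single_shard(export_dir: str, dest_dir: str, shard_name: str) -> Path:
    """Copy exactly one shard from the full export into a fresh DATA_PATH."""
    src = Path(export_dir) / shard_name
    if not src.exists():
        raise FileNotFoundError(f"shard {shard_name!r} not found in {export_dir}")
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dest / shard_name))
```

Documenting the exact `shard_name` in the README alongside a helper like this would make the `train_shards:1/195` configuration reproducible from the published export.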


Comment on lines +410 to +413
self.encoder_schedule = [i % self.unique_layers for i in range(self.num_encoder_layers)]
mirrored_decoder = list(reversed(self.encoder_schedule[: self.num_skip_weights]))
trailing_decoder = [i % self.unique_layers for i in range(self.num_skip_weights, self.num_decoder_layers)]
self.decoder_schedule = mirrored_decoder + trailing_decoder


P2 Badge Reject UNIQUE_LAYERS settings that leave blocks unused

This schedule only uses every allocated block when UNIQUE_LAYERS <= NUM_LAYERS // 2 or UNIQUE_LAYERS == NUM_LAYERS. For configurations in between those ranges that the constructor currently accepts (for example NUM_LAYERS=18 with UNIQUE_LAYERS=12), encoder_schedule is still 0..8 and decoder_schedule becomes 8..0, so blocks[9:] are dead parameters that never participate in the forward pass but still count toward the 16 MB budget. Either the schedule needs to cover all unique blocks or those settings should be rejected.
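A constructor-time guard along these lines would implement the rejection the reviewer suggests. This is a sketch that mirrors the names in the quoted snippet; the standalone function and its signature are assumptions about the surrounding class:

```python
# Sketch of a guard that rejects UNIQUE_LAYERS settings whose schedule
# leaves allocated blocks unused (dead parameters under the 16 MB cap).
# Schedule construction follows the quoted snippet; the wrapper is assumed.

def build_schedules(num_layers: int, unique_layers: int, num_skip_weights: int):
    num_encoder = num_layers // 2
    num_decoder = num_layers - num_encoder
    encoder = [i % unique_layers for i in range(num_encoder)]
    mirrored = list(reversed(encoder[:num_skip_weights]))
    trailing = [i % unique_layers for i in range(num_skip_weights, num_decoder)]
    decoder = mirrored + trailing

    # Reject configurations where some blocks are never scheduled.
    used = set(encoder) | set(decoder)
    unused = sorted(set(range(unique_layers)) - used)
    if unused:
        raise ValueError(
            f"blocks {unused} are never scheduled; choose "
            f"UNIQUE_LAYERS <= {num_encoder} or UNIQUE_LAYERS == {num_layers}"
        )
    return encoder, decoder
```

With NUM_LAYERS=18 and UNIQUE_LAYERS=12 this raises instead of silently carrying blocks 9..11 as dead weight, while the intended UNIQUE_LAYERS=9 configuration passes.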


@MatoTeziTanka

MatoTeziTanka commented Apr 11, 2026

Community Review — Add mirrored-recurrence MLX non-record submission

Compliance: NEEDS AUTHOR ACTION — train_gpt.py fails to import on CT2038 (Python 3.10 / torch 2.10.0+cpu)

What I found: The CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with:

ModuleNotFoundError: No module named 'mlx'


Recommendation: Could you run python3 -c "import py_compile; py_compile.compile('train_gpt.py')" on your records-folder train_gpt.py under Python 3.10 specifically? The eval image is Python 3.10 per Issue #17 / the README, so any parse error on 3.10 blocks the submission at import time before any of the scored-eval logic runs.

Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet because the audit halts at the import step.
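The suggested parse check can be wrapped in a small script so it returns a clean pass/fail instead of printing to stderr and continuing. A sketch using only the stdlib; the `check_parses` helper is mine, not part of the repo's tooling, and should be run under the eval image's Python 3.10:

```python
# Sketch of the suggested pre-submission check: compile train_gpt.py with
# the stdlib py_compile module (doraise=True turns syntax errors into
# exceptions) to catch parse failures before the eval image's import step.
import py_compile
import sys


def check_parses(path: str) -> bool:
    """Return True if `path` compiles cleanly under the current Python."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError as err:
        print(f"parse failure: {err}", file=sys.stderr)
        return False
```

Note that a parse check alone would not have caught this particular failure, since `ModuleNotFoundError: No module named 'mlx'` is an import-time error on a syntactically valid file; an `import` smoke test in the eval environment is still needed on top of it.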


Reviewed by @MatoTeziTanka / The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — ModuleNotFoundError: No module named 'mlx'. Classification via classify_prs.py AST-based classifier; full compliance audit deferred until the import issue is resolved. Auto-drafted from a template and spot-checked before posting.
