
Voxtral Realtime: enable bf16 for Metal backend with quantization#17845

Merged
mergennachin merged 2 commits into main from bf16_voxtral_metal on Mar 18, 2026

Conversation

@mergennachin
Contributor

The Metal AOTI backend already handles bf16 correctly (fp32 attention
masks, fp32 RoPE upcast, dtype-agnostic KV caches and SDPA). Enable
--dtype bf16 as the default recipe for Metal CI and update all
documentation to recommend bf16 with fpa4w quantization.

@mergennachin mergennachin requested a review from lucylq as a code owner March 4, 2026 14:28
Copilot AI review requested due to automatic review settings March 4, 2026 14:29
@pytorch-bot

pytorch-bot bot commented Mar 4, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17845

Note: Links to docs will display an error until the docs builds have been completed.

⚠️ 1 Awaiting Approval, 21 Pending

As of commit 0e981f1 with merge base b40d6fe:

AWAITING APPROVAL - The following workflow needs approval before CI can run:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 4, 2026
@github-actions

github-actions bot commented Mar 4, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment

Pull request overview

Enables and recommends bf16 for Voxtral Realtime exports on Metal when using quantization, updating CI export arguments and user-facing docs to reflect the preferred configuration for memory/throughput.

Changes:

  • Update Voxtral Realtime docs to include bf16 memory footprint numbers and recommend --dtype bf16 for Metal quantized exports.
  • Adjust example Metal export command(s) to include --dtype bf16 alongside fpa4w.
  • Update Metal CI export script to pass --dtype bf16 for the quantized-int4-metal configuration.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

File Description
examples/models/voxtral_realtime/model.md Updates memory calculations and guidance around bf16 + quantization for Metal/CUDA.
examples/models/voxtral_realtime/export_voxtral_rt.py Updates usage example to show Metal export with bf16 + fpa4w.
examples/models/voxtral_realtime/README.md Updates Metal backend table and export examples to recommend bf16 with fpa4w.
.ci/scripts/export_model_artifact.sh Ensures Metal int4 quantized CI export passes --dtype bf16.


@mergennachin mergennachin marked this pull request as draft March 4, 2026 14:37
@mergennachin mergennachin temporarily deployed to upload-benchmark-results March 4, 2026 15:40 — with GitHub Actions Inactive

Fix a Metal shader compilation bug in the streaming encoder where
bool.to(bf16) generates `bfloat tmp = 0.0;` — Metal Shading Language
doesn't support implicit float-to-bfloat literal conversion. Use
.float() instead and let mul_ handle type promotion.
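The promotion trick the commit describes can be illustrated in eager PyTorch. This is a minimal sketch of the pattern, not the actual streaming-encoder code; the tensor shapes and variable names are illustrative.

```python
import torch

# A boolean mask applied to bf16 hidden states, as in the commit message.
hidden = torch.randn(2, 4, dtype=torch.bfloat16)
mask = torch.tensor([[True, False, True, True],
                     [False, True, True, False]])

# Problematic on the Metal AOTI backend: `mask.to(torch.bfloat16)` makes
# the generated shader assign a float literal to a bfloat temporary
# (`bfloat tmp = 0.0;`), which Metal Shading Language rejects.
# hidden = hidden * mask.to(torch.bfloat16)

# The commit's fix: upcast the mask to fp32 and let the in-place multiply
# handle type promotion. The in-place op keeps `hidden` in bf16.
hidden.mul_(mask.float())
```

In-place ops in PyTorch allow a float32 operand against a bf16 output because float-to-float downcasting is permitted by the promotion rules, so no bf16 literal ever appears in the generated kernel.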
@mergennachin mergennachin temporarily deployed to upload-benchmark-results March 16, 2026 22:03 — with GitHub Actions Inactive
@mergennachin mergennachin marked this pull request as ready for review March 18, 2026 13:51
Copilot AI review requested due to automatic review settings March 18, 2026 13:51
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.



fp32: ≈ 832 MB, bf16: ≈ 416 MB. Encoder KV caches (streaming):
32 layers × 2 × 1500 × 32 × 64 × bytes_per_elem. fp32: ≈ 786 MB,
bf16: ≈ 393 MB.
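The encoder KV-cache figures quoted above follow from simple arithmetic; the sizes are taken directly from the comment, with nothing else assumed:

```python
# Encoder KV-cache footprint (streaming), per the comment above:
# 32 layers x 2 (K and V) x 1500 positions x 32 heads x 64 head_dim.
elems = 32 * 2 * 1500 * 32 * 64

fp32_mb = elems * 4 / 1e6   # 4 bytes per fp32 element
bf16_mb = elems * 2 / 1e6   # 2 bytes per bf16 element

print(round(fp32_mb), round(bf16_mb))  # ~786 MB fp32 vs ~393 MB bf16
```

Halving the element size is exactly where the 2x memory saving from `--dtype bf16` comes from.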

Comment on lines 167 to 171
**Metal:** `MetalSDPA` uses `torch.ops.aten._scaled_dot_product_attention_math_for_mps`,
which handles GQA natively (the kernel infers the group ratio from differing
Q vs K/V head counts), avoiding the memory bandwidth overhead of
`repeat_interleave`. Uses explicit additive attention masks
that must match the Q/K/V dtype (the kernel reads masks as `device T*`).
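"Infers the group ratio from differing Q vs K/V head counts" amounts to a simple index mapping, sketched below. The head counts are illustrative, not Voxtral's actual configuration:

```python
# GQA: each group of query heads shares one K/V head. The kernel can
# derive the grouping purely from the two head counts, so K/V never
# need to be materialized per query head via repeat_interleave.
def kv_head_for(q_head: int, num_q_heads: int, num_kv_heads: int) -> int:
    assert num_q_heads % num_kv_heads == 0
    gqa_factor = num_q_heads // num_kv_heads  # inferred group ratio
    return q_head // gqa_factor

# Illustrative counts: 32 query heads sharing 8 K/V heads.
mapping = [kv_head_for(q, 32, 8) for q in range(32)]
print(mapping)  # query heads 0-3 -> kv 0, 4-7 -> kv 1, ...
```

With `repeat_interleave`, each K/V head would instead be copied `gqa_factor` times in memory; the index mapping reads the same data without the copies.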
@mergennachin mergennachin merged commit 202c6af into main Mar 18, 2026
318 of 328 checks passed
@mergennachin mergennachin deleted the bf16_voxtral_metal branch March 18, 2026 14:54
@mergennachin mergennachin temporarily deployed to upload-benchmark-results March 18, 2026 14:56 — with GitHub Actions Inactive

Labels

ciflow/metal · CLA Signed
