[ET-VK] Implement missing Vulkan operators for Parakeet TDT model#18059

Merged
meta-codesync[bot] merged 8 commits into gh/SS-JIA/476/base from gh/SS-JIA/476/head on Mar 18, 2026
Conversation

@SS-JIA (Contributor) commented Mar 10, 2026

Stack from ghstack (oldest at bottom):

Add missing operators needed for Parakeet TDT model support:

  • New symint ops: sym_sub, sym_floordiv, sym_mul in SymIntOps.cpp;
    register operator.floordiv and operator.mul as ephemeral ops in
    op_registry.py
  • New tensor ops: bitwise_not (via unary_op shader with uint8 DTYPE),
    logical_and (alias for bitwise_and dispatch)
  • Improve _to_copy: expand dtype support to FP_INT_BOOL_T and use
    pick_io_storage_fn to restrict to CONTIGUOUS_BUFFER for non-fp
    conversions
  • Fix where resize: compute output shape via broadcast across all tensor
    inputs instead of always using the second input's shape
  • Add symint support to split: use extract_int_or_symint_list instead of
    get_int_list in resize_split_node and split_with_sizes_copy_default
  • Mark scalar_tensor as supporting resize
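
The `where` resize fix above (computing the output shape by broadcasting across all tensor inputs rather than taking the second input's shape) can be sketched with a small NumPy-style helper. `broadcast_shapes` below is a hypothetical reference, not the actual C++ resize implementation:

```python
def broadcast_shapes(*shapes):
    """Compute the broadcast output shape across ALL inputs, mirroring
    the fix to the `where` resize logic (hypothetical reference helper)."""
    ndim = max(len(s) for s in shapes)
    # Right-align the shapes by padding with leading 1s.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError(f"incompatible dims: {dims}")
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

# where(cond, a, b): the output shape must broadcast across all three
# inputs, not just follow the second input's shape.
print(broadcast_shapes((4, 1), (1,), (3, 1, 5)))  # (3, 4, 5)
```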

Differential Revision: D95970159
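
As a sanity check for the new bitwise_not path (dispatched through the unary_op shader with a uint8 DTYPE): on uint8 data, bitwise NOT is simply `~x` masked to 8 bits. The snippet below is an illustrative reference for the expected semantics, not the shader code:

```python
def bitwise_not_u8(x: int) -> int:
    """Reference semantics of bitwise_not on uint8 values:
    ~x masked to 8 bits, which is equivalent to 255 - x."""
    assert 0 <= x <= 255
    return ~x & 0xFF

print(bitwise_not_u8(0))           # 255
print(bitwise_not_u8(0b10101010))  # 85 (0b01010101)
```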

cc @manuelcandales @digantdesai @cbilgin
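
One subtlety worth noting for the new sym_floordiv op: it is registered against Python's `operator.floordiv`, which floors toward negative infinity, whereas C++'s built-in integer division truncates toward zero. A hypothetical reference for the intended semantics (not the SymIntOps.cpp code):

```python
import operator

def sym_floordiv(a: int, b: int) -> int:
    # operator.floordiv floors toward negative infinity; a naive C++
    # implementation using plain a / b would truncate toward zero and
    # disagree for mixed-sign operands.
    return operator.floordiv(a, b)

print(sym_floordiv(7, 2))   # 3
print(sym_floordiv(-7, 2))  # -4 (truncating division would give -3)
```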

pytorch-bot added the module: vulkan label (Issues related to the Vulkan delegate and code under backends/vulkan/) on Mar 10, 2026
@pytorch-bot commented Mar 10, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18059

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 11 Cancelled Jobs, 1 Unrelated Failure

As of commit d6d9825 with merge base 22174fa:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ssjia added 4 commits March 11, 2026 09:52
ssjia added 2 commits March 17, 2026 11:27
meta-codesync bot merged commit 8539c47 into gh/SS-JIA/476/base on Mar 18, 2026 (204 of 220 checks passed) and deleted the gh/SS-JIA/476/head branch on March 18, 2026 at 01:48.
SS-JIA pushed a commit that referenced this pull request Mar 18, 2026

Pull Request resolved: #18059
ghstack-source-id: 353546692
@exported-using-ghexport

Differential Revision: [D95970159](https://our.internmc.facebook.com/intern/diff/D95970159/)

Labels

- CLA Signed: managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.
- fb-exported
- meta-exported
- module: vulkan: Issues related to the Vulkan delegate and code under backends/vulkan/
