[ET-VK] Introduce graph runtime shader library that enables dynamic shapes #2366


Closed
wants to merge 4 commits

Conversation

SS-JIA (Contributor) commented Mar 12, 2024

Stack from ghstack (oldest at bottom):

## Context

pytorch/pytorch#121598 introduces the ability to support dynamic shapes through tensor metadata updates.

The idea is fairly simple. Instead of shaders accepting a single UBO with size data for all arguments:

```
layout(set = 0, binding = 2) uniform PRECISION restrict Block {
  ivec4 output_sizes;
  ivec4 other_sizes;
  float alpha;
};
```

Shaders will accept separate UBOs for each piece of tensor metadata:

```
layout(set = 0, binding = 3) uniform PRECISION restrict OutSizes {
  ivec4 data;
}
out_sizes;

layout(set = 0, binding = 4) uniform PRECISION restrict InSizes {
  ivec4 data;
}
in_sizes;

layout(set = 0, binding = 5) uniform PRECISION restrict OtherSizes {
  ivec4 data;
}
other_sizes;

layout(set = 0, binding = 6) uniform PRECISION restrict Alpha {
  float data;
}
alpha;
```

Each UBO will be owned and maintained by the corresponding `vTensor` instance. To support resizing a graph input, each tensor in the graph only needs to update its metadata UBOs via the `tensor.virtual_resize(new_sizes)` call. Shader dispatches in subsequent command buffer submissions will then see the updated metadata and execute as if the tensor had the new sizes.
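
To make the flow concrete, a host-side resize might look like the following minimal sketch. Only `virtual_resize(...)` and `gpu_sizes_ubo()` are named by this changeset; the free function and the `graph.submit()` call are hypothetical stand-ins for the actual runtime API, and the `ComputeGraph`/`vTensor` declarations are assumed to be in scope.

```
// Minimal sketch of the dynamic-shape flow described above; graph.submit()
// and the function itself are hypothetical, not the actual runtime API.
#include <cstdint>
#include <vector>

void resize_input_and_rerun(
    ComputeGraph& graph,
    vTensor& input,
    const std::vector<int64_t>& new_sizes) {
  // Rewrites the sizes UBO owned by the vTensor; no image or buffer
  // storage is reallocated.
  input.virtual_resize(new_sizes);

  // Every previously recorded dispatch that bound input.gpu_sizes_ubo()
  // reads the updated metadata on its next execution, so the recorded
  // command buffer can simply be submitted again.
  graph.submit();  // hypothetical stand-in for the next submission
}
```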

This changeset introduces a new shader library for the Vulkan graph runtime that enables dynamic shapes through this technique, rather than relying on the shader library from PyTorch Vulkan.

## Considerations

Technically, the UBO update technique can be applied to the shaders from PyTorch Vulkan as well. If that's the case, why introduce a new shader library for the graph runtime?

The primary motivation is code quality.

First, having `vTensor` instances supply UBOs for their own metadata greatly reduces the need for operator-specific, ad-hoc `Params` structs that organize arguments to be written into an `api::UniformParamsBuffer`.

Constructing an `ExecuteNode` for binary operators is now:

```
graph.execute_nodes().emplace_back(new ExecuteNode(
    graph,
    api::shader_registry().get_shader_info(kernel_name.str()),
    global_size,
    local_size,
    {{out, api::MemoryAccessType::WRITE},
     {{arg1, arg2}, api::MemoryAccessType::READ}},
    {t_out.gpu_sizes_ubo(),
     t_in1.gpu_sizes_ubo(),
     t_in2.gpu_sizes_ubo(),
     graph.create_params_buffer(alpha_val)}));
```

instead of:

```
ArithmeticParams block{
    get_size_as_ivec4(t_out),
    get_size_as_ivec4(t_in1),
    get_size_as_ivec4(t_in2),
    alpha_val,
};
api::UniformParamsBuffer params(graph.context(), block);

graph.execute_nodes().emplace_back(new ExecuteNode(
    graph,
    shader,
    global_size,
    local_size,
    {{out, api::MemoryAccessType::WRITE},
     {{arg1, arg2}, api::MemoryAccessType::READ}},
    std::move(params)));
```
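
For comparison, the shape of the operator-specific struct that the old pattern relies on is sketched below; the field names and the `api::utils::ivec4` element type are inferred from the initializer above rather than copied from the actual source.

```
// Ad-hoc parameter struct implied by the old pattern above (sketch).
// Field order must match the brace initializer, and the uniform buffer
// layout is maintained by hand for every such operator.
struct ArithmeticParams final {
  api::utils::ivec4 out_sizes;  // get_size_as_ivec4(t_out)
  api::utils::ivec4 in1_sizes;  // get_size_as_ivec4(t_in1)
  api::utils::ivec4 in2_sizes;  // get_size_as_ivec4(t_in2)
  float alpha;                  // scalar argument of the binary op
};
```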

Another consideration is that pytorch/pytorch#115948, which landed fairly recently, enables much more expressive shader templates through the use of Python code blocks in the GLSL template. Such templates can easily express variants for different data types, packing structures, etc. Introducing a new shader library provides the opportunity to rewrite the shaders from PyTorch Vulkan in a more generic and extensible way.

Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/)


pytorch-bot bot commented Mar 12, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2366

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 5cdc6cd with merge base 4fea983:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label Mar 12, 2024
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D54754545




SS-JIA added a commit that referenced this pull request Mar 13, 2024
…hapes

Pull Request resolved: #2366

ghstack-source-id: 218421178
@exported-using-ghexport

Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/)
facebook-github-bot (Contributor)

This pull request has been merged in 835279e.

jorgep31415 added a commit that referenced this pull request Mar 13, 2024
Missed this in #2366

Differential Revision: [D54880024](https://our.internmc.facebook.com/intern/diff/D54880024/)

[ghstack-poisoned]
jorgep31415 added a commit that referenced this pull request Mar 13, 2024
Missed this in #2366

Differential Revision: [D54880024](https://our.internmc.facebook.com/intern/diff/D54880024/)

ghstack-source-id: 218593311
Pull Request resolved: #2418
facebook-github-bot pushed a commit that referenced this pull request Mar 14, 2024
Summary:
Pull Request resolved: #2418

Missed this in #2366
ghstack-source-id: 218593311
exported-using-ghexport
bypass-github-export-checks

Reviewed By: SS-JIA

Differential Revision: D54880024

fbshipit-source-id: c4e19d8fefbb9d2fc4547ec2edc236060638da5e
@SS-JIA SS-JIA deleted the gh/SS-JIA/11/head branch January 24, 2025 19:40
Labels
CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) · fb-exported · Merged