[ExecuTorch][Weight Sharing] Track Named Data Store in EdgeProgramManager #9293

Merged: 1 commit into main on Mar 14, 2025

Conversation

pytorchbot (Collaborator)

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #9151 by @mcr229
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/mcr229/7/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/mcr229/7/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/mcr229/7/orig
@diff-train-skip-merge

[ExecuTorch][Weight Sharing] Track Named Data Store in EdgeProgramManager

Pull Request resolved: #9151

We enable backends to return named data by adding a NamedDataStoreOutput field to the preprocess result. This change is fully backwards compatible: backends with an existing preprocess implementation see no difference unless they explicitly opt in.

To leverage the new NamedDataStore, backend developers can initialize a NamedDataStore() within preprocess, call add_named_data on it, and return NamedDataStore.get_named_data_store_output() in the preprocess result, as follows:

```python
def preprocess(
    exported_program: ExportedProgram,
    compile_specs: List[CompileSpec],
) -> PreprocessResult:
    named_data_store = NamedDataStore()

    for node in exported_program.graph.nodes:
        # Register a serialized payload (e.g. constant weights) under a name.
        # "name" and data_bytes are placeholders for backend-specific values.
        named_data_store.add_named_data("name", data_bytes)

    return PreprocessResult(
        processed_bytes=processed_bytes,
        debug_handle_map={},
        data_store_output=named_data_store.get_named_data_store_output(),
    )
```


Under the hood, the data store output is embedded in the LoweredBackendModule (serializing a LoweredBackendModule by itself with a named_data_store_output is still a TODO). Via the EdgeProgramManager path, we add each backend's named_data_store_output to the EdgeProgramManager's named data store, which tracks all the named data returned by backends.
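To illustrate the aggregation step described above, here is a minimal, hypothetical sketch of how a program-level store might fold in the NamedDataStoreOutput returned by each backend, deduplicating identical payloads registered under the same name. The class and method names (SimpleNamedDataStore, merge_named_data_store, the buffers field) are illustrative assumptions, not the actual ExecuTorch API.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class NamedDataStoreOutput:
    # Maps a string key to the serialized bytes a backend wants stored.
    buffers: Dict[str, bytes] = field(default_factory=dict)


@dataclass
class SimpleNamedDataStore:
    """Toy program-level store that tracks named data across backends."""

    buffers: Dict[str, bytes] = field(default_factory=dict)

    def add_named_data(self, name: str, data: bytes) -> None:
        existing = self.buffers.get(name)
        if existing is not None and existing != data:
            raise ValueError(f"conflicting data for key {name!r}")
        # Identical data under the same name is stored only once.
        self.buffers[name] = data

    def merge_named_data_store(self, output: NamedDataStoreOutput) -> None:
        # Fold one backend's preprocess output into the program-level store.
        for name, data in output.buffers.items():
            self.add_named_data(name, data)


# Two backends return the same weight under the same name; after merging,
# the program-level store holds a single copy plus the unique second key.
store = SimpleNamedDataStore()
store.merge_named_data_store(NamedDataStoreOutput({"w1": b"\x01\x02"}))
store.merge_named_data_store(
    NamedDataStoreOutput({"w1": b"\x01\x02", "w2": b"\x03"})
)
```

Rejecting conflicting payloads for the same key keeps the sharing semantics unambiguous: a name identifies exactly one blob across all backends.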
ghstack-source-id: 271732049
@exported-using-ghexport

Differential Revision: [D70451660](https://our.internmc.facebook.com/intern/diff/D70451660/)

pytorch-bot bot commented Mar 14, 2025

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9293

❌ 2 new failures as of commit aef99e3 with merge base 630d0cc.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 14, 2025
@SS-JIA SS-JIA added the release notes: xnnpack Changes to the XNNPack backend delegate label Mar 14, 2025
@SS-JIA SS-JIA merged commit d44d6cd into main Mar 14, 2025
79 of 82 checks passed
@SS-JIA SS-JIA deleted the gh/mcr229/7/orig branch March 14, 2025 23:27
DannyYuyang-quic pushed a commit to CodeLinaro/executorch that referenced this pull request Apr 2, 2025

Co-authored-by: Max Ren <[email protected]>