Update documentation flow and add placeholders #8287

Merged
2 changes: 1 addition & 1 deletion backends/vulkan/README.md
@@ -1,4 +1,4 @@
# ExecuTorch Vulkan Delegate
# Vulkan Backend

The ExecuTorch Vulkan delegate is a native GPU delegate for ExecuTorch that is
built on top of the cross-platform Vulkan GPU API standard. It is primarily
2 changes: 1 addition & 1 deletion docs/source/concepts.md
@@ -1,4 +1,4 @@
# ExecuTorch Concepts
# Concepts
This page provides an overview of key concepts and terms used throughout the ExecuTorch documentation. It is intended to help readers understand the terminology and concepts used in PyTorch Edge and ExecuTorch.

## Concepts Map
2 changes: 1 addition & 1 deletion docs/source/debug-backend-delegate.md
@@ -1,4 +1,4 @@
# Debug Backend Delegate
# Debugging Delegation

We provide a set of utility functions to give users insight into what happened to the graph modules during the `to_backend()` stage.
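
For context, a typical inspection of those delegation results looks roughly like the sketch below. The import path for `get_delegation_info` (shown here as `executorch.devtools.backend_debug`) and the `XnnpackPartitioner` location are assumptions about the current module layout, not guarantees.

```python
# Hedged sketch: inspect how much of a model was delegated during to_backend().
# Import paths for get_delegation_info and XnnpackPartitioner are assumptions.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.devtools.backend_debug import get_delegation_info
from executorch.exir import to_edge

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

# Export to an Edge program and lower supported subgraphs to a delegate.
edge_manager = to_edge(torch.export.export(model, example_inputs))
edge_manager = edge_manager.to_backend(XnnpackPartitioner())

# Summarize delegated vs. non-delegated nodes in the lowered graph.
delegation_info = get_delegation_info(edge_manager.exported_program().graph_module)
print(delegation_info.get_summary())

# Optional per-operator breakdown (returns a pandas DataFrame).
print(delegation_info.get_operator_delegation_dataframe())
```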

2 changes: 1 addition & 1 deletion docs/source/getting-started-architecture.md
@@ -1,4 +1,4 @@
# High-level Architecture and Components of ExecuTorch
# Architecture and Components

This page describes the technical architecture of ExecuTorch and its individual components. This document is targeted towards engineers who are deploying PyTorch models onto edge devices.

136 changes: 65 additions & 71 deletions docs/source/index.rst
@@ -81,83 +81,53 @@ Topics in this section will help you get started with ExecuTorch.
.. toctree::
:glob:
:maxdepth: 1
:caption: Getting Started
:caption: Usage
:hidden:

getting-started
export-overview
runtime-build-and-cross-compilation
getting-started-faqs
using-executorch-export
using-executorch-android
using-executorch-ios
using-executorch-cpp
using-executorch-troubleshooting

.. toctree::
:glob:
:maxdepth: 1
:caption: Tutorials
:caption: Backends
:hidden:

tutorials/export-to-executorch-tutorial
running-a-model-cpp-tutorial
extension-module
extension-tensor
tutorials/devtools-integration-tutorial
apple-runtime
demo-apps-ios
demo-apps-android
examples-end-to-end-to-lower-model-to-delegate
tutorial-xnnpack-delegate-lowering
build-run-vulkan
..
Alphabetical by backend name. Be sure to keep the same order in the
customcarditem entries below.
executorch-arm-delegate-tutorial
build-run-coreml
build-run-mediatek-backend
build-run-mps
build-run-qualcomm-ai-engine-direct-backend
build-run-xtensa

.. toctree::
:glob:
:maxdepth: 2
:caption: Working with LLMs
:hidden:

Llama <llm/llama>
Llama on Android <llm/llama-demo-android>
Llama on iOS <llm/llama-demo-ios>
Llama on Android via Qualcomm backend <llm/build-run-llama3-qualcomm-ai-engine-direct-backend>
Intro to LLMs in ExecuTorch <llm/getting-started>

.. toctree::
:glob:
:maxdepth: 1
:caption: API Reference
:hidden:

export-to-executorch-api-reference
executorch-runtime-api-reference
runtime-python-api-reference
api-life-cycle
native-delegates-executorch-xnnpack-delegate
native-delegates-executorch-coreml-delegate
native-delegates-executorch-mps-delegate
native-delegates-executorch-vulkan-delegate
native-delegates-executorch-arm-ethos-u-delegate
native-delegates-executorch-qualcomm-delegate
native-delegates-executorch-mediatek-delegate
native-delegates-executorch-cadence-delegate

.. toctree::
Contributor:

we have too many tutorials here

can we remove everything in the Tutorials section, and gradually add back as necessary?

Member Author:

Sounds good. I 100% agree on the tutorials. A lot of what's in there is really reference material and not a tutorial. I'm happy to remove them for now.

:glob:
:maxdepth: 1
:caption: IR Specification
:caption: Tutorials
:hidden:

ir-exir
ir-ops-set-definition

.. toctree::
:glob:
:maxdepth: 1
:caption: Compiler Entry Points
:caption: Developer Tools
:hidden:

compiler-delegate-and-partitioner
compiler-backend-dialect
compiler-custom-compiler-passes
compiler-memory-planning
devtools-overview
bundled-io
etrecord
etdump
runtime-profiling
model-debugging
model-inspector
memory-planning-inspection
delegate-debugging
devtools-tutorial

.. toctree::
:glob:
@@ -171,6 +141,17 @@ Topics in this section will help you get started with ExecuTorch.
portable-cpp-programming
pte-file-format

.. toctree::
:glob:
:maxdepth: 1
:caption: API Reference
:hidden:

export-to-executorch-api-reference
executorch-runtime-api-reference
runtime-python-api-reference
api-life-cycle

.. toctree::
:glob:
:maxdepth: 1
@@ -189,34 +170,47 @@ Topics in this section will help you get started with ExecuTorch.
kernel-library-custom-aten-kernel
kernel-library-selective-build

.. toctree::
:glob:
:maxdepth: 2
:caption: Working with LLMs
:hidden:

Llama <llm/llama>
Llama on Android <llm/llama-demo-android>
Llama on iOS <llm/llama-demo-ios>
Llama on Android via Qualcomm backend <llm/build-run-llama3-qualcomm-ai-engine-direct-backend>
Intro to LLMs in ExecuTorch <llm/getting-started>

.. toctree::
:glob:
:maxdepth: 1
:caption: Backend Delegates
:caption: Backend Development
Contributor:

nice, i like it

:hidden:

native-delegates-executorch-xnnpack-delegate
native-delegates-executorch-vulkan-delegate
backend-delegates-integration
backend-delegates-dependencies
debug-backend-delegate

.. toctree::
:glob:
:maxdepth: 1
:caption: Developer Tools
:caption: IR Specification
:hidden:

devtools-overview
bundled-io
etrecord
etdump
runtime-profiling
model-debugging
model-inspector
memory-planning-inspection
delegate-debugging
devtools-tutorial
ir-exir
ir-ops-set-definition

.. toctree::
:glob:
:maxdepth: 1
:caption: Compiler Entry Points
:hidden:

compiler-delegate-and-partitioner
Contributor:

the first one can actually go to "Backend Development" section

compiler-backend-dialect
compiler-custom-compiler-passes
compiler-memory-planning

.. toctree::
:glob:
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-arm-ethos-u-delegate.md
@@ -0,0 +1,3 @@
# ARM Ethos-U Backend

Placeholder
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-cadence-delegate.md
@@ -0,0 +1,3 @@
# Cadence Backend

Placeholder
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-coreml-delegate.md
@@ -0,0 +1,3 @@
# Core ML Backend

Placeholder for Core ML delegate docs
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-mediatek-delegate.md
@@ -0,0 +1,3 @@
# MediaTek Backend

Placeholder
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-mps-delegate.md
@@ -0,0 +1,3 @@
# MPS Backend

Placeholder
3 changes: 3 additions & 0 deletions docs/source/native-delegates-executorch-qualcomm-delegate.md
@@ -0,0 +1,3 @@
# Qualcomm Backend

Placeholder
Contributor:

Are you basically moving contents over from https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html to this, and making adjustments along the way?

2 changes: 1 addition & 1 deletion docs/source/native-delegates-executorch-xnnpack-delegate.md
@@ -1,4 +1,4 @@
# ExecuTorch XNNPACK delegate
# XNNPACK Backend

This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high-performance delegate aims to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases.
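
As a rough illustration of the intended use, lowering to XNNPACK follows the standard export-and-lower flow. The sketch below assumes the `XnnpackPartitioner` import path and the `to_edge_transform_and_lower` entry point; treat both as assumptions to verify against the current API.

```python
# Hedged sketch: export a small model and lower supported ops to the XNNPACK delegate.
# Import paths and API names are assumptions based on the documented flow.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# Export, partition supported subgraphs to XNNPACK, and produce an ExecuTorch program.
et_program = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()

# Serialize the .pte file that the runtime loads for CPU inference via XNNPACK.
with open("tiny_model_xnnpack.pte", "wb") as f:
    f.write(et_program.buffer)
```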

Binary file added docs/source/tutorials_source/bundled_program.bp
Binary file not shown.
3 changes: 3 additions & 0 deletions docs/source/using-executorch-android.md
@@ -0,0 +1,3 @@
# Using ExecuTorch on Android

Placeholder for top-level Android documentation
3 changes: 3 additions & 0 deletions docs/source/using-executorch-cpp.md
@@ -0,0 +1,3 @@
# Using ExecuTorch with C++

Placeholder for top-level C++ documentation
3 changes: 3 additions & 0 deletions docs/source/using-executorch-export.md
@@ -0,0 +1,3 @@
# Model Export

Placeholder for top-level export documentation
3 changes: 3 additions & 0 deletions docs/source/using-executorch-ios.md
@@ -0,0 +1,3 @@
# Using ExecuTorch on iOS

Placeholder for top-level iOS documentation
3 changes: 3 additions & 0 deletions docs/source/using-executorch-troubleshooting.md
@@ -0,0 +1,3 @@
# Troubleshooting, Profiling, and Optimization

Placeholder for top-level troubleshooting, profiling, and devtool docs