
Commit e12d6be

Use relative links in llm/getting-started.md

Use relative markdown links instead of full URLs. This way, the docs will always point to a consistent branch.

1 parent 6c36f10 commit e12d6be
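The mechanical part of this change can be sketched with a small regex rewrite (illustrative only; the URL-to-relative-path mapping assumed here is that pages under `/stable/` or `/main/` correspond to `.md` files one directory above `docs/source/llm/`, which is not stated in the commit itself):

```python
import re

# Match absolute ExecuTorch doc URLs like
#   https://pytorch.org/executorch/stable/quantization-overview.html
# on either the stable or main branch.
PATTERN = re.compile(
    r"https://pytorch\.org/executorch/(?:stable|main)/([\w./-]+)\.html"
)

def relativize(markdown: str) -> str:
    # Rewrite each matched URL to a relative .md path (assumed layout).
    return PATTERN.sub(lambda m: "../" + m.group(1) + ".md", markdown)

line = "see [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup.html)."
print(relativize(line))
# → see [Setting Up ExecuTorch](../getting-started-setup.md).
```

A pass like this would still need manual review, since some pages (for example same-directory llm pages) map to different relative paths.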

1 file changed: +18 -23 lines changed

docs/source/llm/getting-started.md (+18 -23)
@@ -77,7 +77,7 @@ cd ../..
 :::
 ::::
 
-For more information, see [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup.html).
+For more information, see [Setting Up ExecuTorch](../getting-started-setup.md).
 
 
 ## Running a Large Language Model Locally
@@ -161,7 +161,7 @@ with open("nanogpt.pte", "wb") as file:
 
 To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory.
 
-For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html) and
+For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and
 [torch.export](https://pytorch.org/docs/stable/export.html).
 
 ### Step 2. Invoking the Runtime
@@ -305,8 +305,8 @@ curl -O https://raw.githubusercontent.com/GregoryComer/et-tutorials/quantization
 curl -O https://raw.githubusercontent.com/GregoryComer/et-tutorials/quantization/nanogpt/basic_sampler.h
 ```
 
-To learn more, see [Running an ExecuTorch Model in C++](https://pytorch.org/executorch/main/running-a-model-cpp-tutorial.html)
-and the [ExecuTorch Runtime API Reference](https://pytorch.org/executorch/main/executorch-runtime-api-reference.html).
+To learn more, see [Running an ExecuTorch Model in C++](../running-a-model-cpp-tutorial.md)
+and the [ExecuTorch Runtime API Reference](../executorch-runtime-api-reference.md).
 
 ### Building and Running
 
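The `basic_sampler.h` fetched in the hunk above implements token sampling for the runtime in C++. As a reference point, one common "basic" scheme, temperature-softmax sampling with greedy fallback, can be sketched in Python (an illustration only; the header's exact strategy is not shown in this diff):

```python
import math
import random

def sample(logits, temperature=1.0, rng=random.random):
    # Greedy decoding when temperature is (near) zero: pick the argmax.
    if temperature < 1e-6:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits, shifted by the max for
    # numerical stability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

print(sample([0.1, 5.0, 0.2], temperature=0.0))  # → 1 (greedy picks argmax)
```

Higher temperatures flatten the distribution and make low-probability tokens more likely to be chosen.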
@@ -481,11 +481,9 @@ target_link_libraries(
 xnnpack_backend) # Provides the XNNPACK CPU acceleration backend
 ```
 
-Keep the rest of the code the same. For more details refer to
-[Exporting to ExecuTorch](https://pytorch.org/executorch/main/llm/getting-started.html#step-1-exporting-to-executorch)
-and
-[Invoking the Runtime](https://pytorch.org/executorch/main/llm/getting-started.html#step-2-invoking-the-runtime)
-for more details
+Keep the rest of the code the same. For more details, refer to [Exporting
+to ExecuTorch](#step-1-exporting-to-executorch) and [Invoking the
+Runtime](#step-2-invoking-the-runtime).
 
 At this point, the working directory should contain the following files:
 
@@ -520,10 +518,8 @@ Once upon a time, there was a man who was a member of the military...
 
 
 For more information regarding backend delegation, see the ExecuTorch guides
-for the
-[XNNPACK Backend](https://pytorch.org/executorch/stable/tutorial-xnnpack-delegate-lowering.html)
-and
-[CoreML Backend](https://pytorch.org/executorch/stable/build-run-coreml.html).
+for the [XNNPACK Backend](../tutorial-xnnpack-delegate-lowering.md) and [CoreML
+Backend](../build-run-coreml.md).
 
 ## Quantization
 
@@ -609,7 +605,7 @@ target_link_libraries(
 xnnpack_backend) # Provides the XNNPACK CPU acceleration backend
 ```
 
-For more information, see [Quantization in ExecuTorch](https://pytorch.org/executorch/stable/quantization-overview.html).
+For more information, see [Quantization in ExecuTorch](../quantization-overview.md).
 
 ## Profiling and Debugging
 After lowering a model by calling `to_backend()`, you may want to see what got delegated and what didn’t. ExecuTorch
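As background for the quantization overview linked in the hunk above: symmetric 8-bit quantization maps floats to integers with a per-tensor scale. A simplified sketch of the arithmetic (illustrative only, not ExecuTorch's actual quantizer):

```python
def quantize_int8(values):
    # Symmetric per-tensor quantization: pick the scale so that the
    # largest magnitude maps to 127, then round and clamp.
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; rounding error is bounded by scale / 2.
    return [x * scale for x in q]

q, s = quantize_int8([-1.0, 0.0, 0.25, 1.0])
print(q)  # → [-127, 0, 32, 127]
print(dequantize(q, s))
```

Real quantizers add per-channel scales, zero points for asymmetric schemes, and calibration, but the core round-to-int8 step is the same.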
@@ -687,7 +683,7 @@ Through the ExecuTorch SDK, users are able to profile model execution, giving ti
 
 ##### ETRecord generation (Optional)
 
-An ETRecord is an artifact generated at the time of export that contains model graphs and source-level metadata linking the ExecuTorch program to the original PyTorch model. You can view all profiling events without an ETRecord, though with an ETRecord, you will also be able to link each event to the types of operators being executed, module hierarchy, and stack traces of the original PyTorch source code. For more information, see [https://pytorch.org/executorch/main/sdk-etrecord.html](https://pytorch.org/executorch/main/sdk-etrecord.html)
+An ETRecord is an artifact generated at the time of export that contains model graphs and source-level metadata linking the ExecuTorch program to the original PyTorch model. You can view all profiling events without an ETRecord, though with an ETRecord, you will also be able to link each event to the types of operators being executed, module hierarchy, and stack traces of the original PyTorch source code. For more information, see [the ETRecord docs](../sdk-etrecord.md).
 
 
 In your export script, after calling `to_edge()` and `to_executorch()`, call `generate_etrecord()` with the `EdgeProgramManager` from `to_edge()` and the `ExecuTorchProgramManager` from `to_executorch()`. Make sure to copy the `EdgeProgramManager`, as the call to `to_backend()` mutates the graph in-place.
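The copy matters because `to_backend()` mutates the graph in place. The underlying pitfall is ordinary Python aliasing, sketched here with a stand-in object (`FakeProgram` is hypothetical, not the real `EdgeProgramManager`):

```python
import copy

class FakeProgram:
    """Hypothetical stand-in for an EdgeProgramManager-like object."""
    def __init__(self):
        self.nodes = ["aten.linear", "aten.relu"]

    def to_backend(self):
        # Mutates the graph in place, like the real to_backend().
        self.nodes = ["delegated_blob"]
        return self

edge = FakeProgram()
snapshot = copy.deepcopy(edge)   # copy BEFORE lowering, as the tutorial advises
edge.to_backend()

print(edge.nodes)      # → ['delegated_blob']
print(snapshot.nodes)  # → ['aten.linear', 'aten.relu'] (original graph preserved)
```

Without the deep copy, `snapshot` would alias the same object and the pre-lowering graph needed by the ETRecord would be lost.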
@@ -709,7 +705,7 @@ Run the export script and the ETRecord will be generated as `etrecord.bin`.
 
 ##### ETDump generation
 
-An ETDump is an artifact generated at runtime containing a trace of the model execution. For more information, see [https://pytorch.org/executorch/main/sdk-etdump.html](https://pytorch.org/executorch/main/sdk-etdump.html)
+An ETDump is an artifact generated at runtime containing a trace of the model execution. For more information, see [the ETDump docs](../sdk-etdump.md).
 
 Include the ETDump header in your code.
 ```cpp
@@ -779,7 +775,7 @@ This prints the performance data in a tabular format in “inspector_out.txt”,
 ![](../_static/img/llm_manual_print_data_tabular.png)
 <a href="../_static/img/llm_manual_print_data_tabular.png" target="_blank">View in full size</a>
 
-To learn more about the Inspector and the rich functionality it provides, see the [Inspector API Reference](https://pytorch.org/executorch/main/sdk-inspector.html).
+To learn more about the Inspector and the rich functionality it provides, see the [Inspector API Reference](../sdk-inspector.md).
 
 ## Custom Kernels
 With the ExecuTorch custom operator APIs, custom operator and kernel authors can easily bring in their kernel into PyTorch/ExecuTorch.
@@ -857,7 +853,7 @@ torch.ops.load_library("libcustom_linear.so")
 Once loaded, you can use the custom operator in PyTorch code.
 
 For more information, see [PyTorch Custom Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html) and
-and [ExecuTorch Kernel Registration](https://pytorch.org/executorch/stable/kernel-library-custom-aten-kernel.html).
+[ExecuTorch Kernel Registration](../kernel-library-custom-aten-kernel.md).
 
 ### Using a Custom Operator in a Model
 
@@ -879,9 +875,8 @@ def replace_linear_with_custom_linear(module):
 
 The remaining steps are the same as the normal flow. Now you can run this module in eager mode as well as export to ExecuTorch.
 
-## How to build Mobile Apps
-You can execute an LLM using ExecuTorch on iOS and Android.
+## How to Build Mobile Apps
+See the instructions for building and running LLMs using ExecuTorch on iOS and Android.
 
-**For iOS see the [iLLaMA App](https://pytorch.org/executorch/main/llm/llama-demo-ios.html).**
-
-**For Android, see the [Android Sample App](https://pytorch.org/executorch/main/llm/llama-demo-android.html).**
+* **[iOS ExecuTorch LLaMA Demo App](llama-demo-ios.md)**
+* **[Android ExecuTorch LLaMA Demo App](llama-demo-android.md)**

0 commit comments