Use relative links in llm/getting-started.md (#3244) (#3310)
Summary:
Use relative markdown links instead of full URLs. This way, the docs will always point to a consistent branch.
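The pattern, repeated throughout the file, swaps a branch-pinned absolute URL for a repo-relative path. One instance, taken from the first hunk below:

```md
<!-- Before: hard-coded to the "stable" docs build -->
[Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup.html)

<!-- After: resolves against whichever branch/preview the reader is browsing -->
[Setting Up ExecuTorch](../getting-started-setup.md)
```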
Pull Request resolved: #3244
Test Plan: Clicked on all modified links in the rendered docs preview: https://docs-preview.pytorch.org/pytorch/executorch/3244/llm/getting-started.html
Reviewed By: Gasoonjia
Differential Revision: D56479234
Pulled By: dbort
fbshipit-source-id: 45fb25f017c73df8606c3fb861acafbdd82fec8c
(cherry picked from commit b560864)
Co-authored-by: Dave Bort <[email protected]>
`docs/source/llm/getting-started.md` (+18, -23)
````diff
@@ -90,7 +90,7 @@ cd ../..
 :::
 ::::
 
-For more information, see [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup.html).
+For more information, see [Setting Up ExecuTorch](../getting-started-setup.md).
 
 
 ## Running a Large Language Model Locally
````
````diff
@@ -185,7 +185,7 @@ with open("nanogpt.pte", "wb") as file:
 
 To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory.
 
-For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html) and
+For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) and
````
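The export flow that `export_nanogpt.py` wraps can be sketched as follows; the stand-in module and input shape here are illustrative assumptions, not the tutorial's actual nanoGPT code:

```python
import torch
from torch.export import export
from executorch.exir import to_edge

# Stand-in for the tutorial's nanoGPT model (illustrative only).
class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)

# Capture -> Edge dialect -> ExecuTorch program, then serialize to .pte.
et_program = to_edge(export(model, example_inputs)).to_executorch()
with open("nanogpt.pte", "wb") as f:
    f.write(et_program.buffer)
```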
````diff
+for the [XNNPACK Backend](../tutorial-xnnpack-delegate-lowering.md) and [CoreML
+Backend](../build-run-coreml.md).
 
 ## Quantization
````
````diff
@@ -681,7 +677,7 @@ target_link_libraries(
     xnnpack_backend) # Provides the XNNPACK CPU acceleration backend
 ```
 
-For more information, see [Quantization in ExecuTorch](https://pytorch.org/executorch/stable/quantization-overview.html).
+For more information, see [Quantization in ExecuTorch](../quantization-overview.md).
 
 ## Profiling and Debugging
 After lowering a model by calling `to_backend()`, you may want to see what got delegated and what didn’t. ExecuTorch
````
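As a concrete illustration of checking what was delegated, a minimal sketch assuming the tutorial's XNNPACK setup (`get_delegation_info` lives in `executorch.exir.backend.utils`; the model is a stand-in):

```python
import torch
from torch.export import export
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge
from executorch.exir.backend.utils import get_delegation_info

class TinyModel(torch.nn.Module):  # stand-in model (illustrative)
    def forward(self, x):
        return torch.nn.functional.relu(x)

edge_manager = to_edge(export(TinyModel().eval(), (torch.randn(1, 8),)))
edge_manager = edge_manager.to_backend(XnnpackPartitioner())

# Summarize which ops were delegated to XNNPACK and which stayed on the portable runtime.
info = get_delegation_info(edge_manager.exported_program().graph_module)
print(info.get_summary())
```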
````diff
@@ -759,7 +755,7 @@ Through the ExecuTorch SDK, users are able to profile model execution, giving ti
 
 ##### ETRecord generation (Optional)
 
-An ETRecord is an artifact generated at the time of export that contains model graphs and source-level metadata linking the ExecuTorch program to the original PyTorch model. You can view all profiling events without an ETRecord, though with an ETRecord, you will also be able to link each event to the types of operators being executed, module hierarchy, and stack traces of the original PyTorch source code. For more information, see [https://pytorch.org/executorch/main/sdk-etrecord.html](https://pytorch.org/executorch/main/sdk-etrecord.html)
+An ETRecord is an artifact generated at the time of export that contains model graphs and source-level metadata linking the ExecuTorch program to the original PyTorch model. You can view all profiling events without an ETRecord, though with an ETRecord, you will also be able to link each event to the types of operators being executed, module hierarchy, and stack traces of the original PyTorch source code. For more information, see [the ETRecord docs](../sdk-etrecord.md).
 
 
 In your export script, after calling `to_edge()` and `to_executorch()`, call `generate_etrecord()` with the `EdgeProgramManager` from `to_edge()` and the `ExecuTorchProgramManager` from `to_executorch()`. Make sure to copy the `EdgeProgramManager`, as the call to `to_backend()` mutates the graph in-place.
````
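That paragraph maps to a short sketch (module paths as of this PR's era, when the SDK lived under `executorch.sdk`; it later moved to `executorch.devtools`; the model is a stand-in):

```python
import copy

import torch
from torch.export import export
from executorch.exir import to_edge
from executorch.sdk import generate_etrecord  # executorch.devtools in newer releases

class TinyModel(torch.nn.Module):  # stand-in model (illustrative)
    def forward(self, x):
        return torch.nn.functional.relu(x)

edge_manager = to_edge(export(TinyModel().eval(), (torch.randn(1, 8),)))

# Copy BEFORE lowering: to_backend() mutates the graph in place.
edge_manager_copy = copy.deepcopy(edge_manager)
et_program = edge_manager.to_executorch()

# Links the ExecuTorch program back to the original PyTorch source.
generate_etrecord("etrecord.bin", edge_manager_copy, et_program)
```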
````diff
@@ -781,7 +777,7 @@ Run the export script and the ETRecord will be generated as `etrecord.bin`.
 
 ##### ETDump generation
 
-An ETDump is an artifact generated at runtime containing a trace of the model execution. For more information, see [https://pytorch.org/executorch/main/sdk-etdump.html](https://pytorch.org/executorch/main/sdk-etdump.html)
+An ETDump is an artifact generated at runtime containing a trace of the model execution. For more information, see [the ETDump docs](../sdk-etdump.md).
 
 Include the ETDump header in your code.
 ```cpp
````
````diff
@@ -851,7 +847,7 @@ This prints the performance data in a tabular format in “inspector_out.txt”,
 <a href="../_static/img/llm_manual_print_data_tabular.png" target="_blank">View in full size</a>
 
-To learn more about the Inspector and the rich functionality it provides, see the [Inspector API Reference](https://pytorch.org/executorch/main/sdk-inspector.html).
+To learn more about the Inspector and the rich functionality it provides, see the [Inspector API Reference](../sdk-inspector.md).
 
 ## Custom Kernels
 With the ExecuTorch custom operator APIs, custom operator and kernel authors can easily bring in their kernel into PyTorch/ExecuTorch.
````
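For context, the Inspector usage behind that table is roughly the following sketch (file names `etdump.etdp` and `etrecord.bin` follow the tutorial's choices; `Inspector` was under `executorch.sdk` at the time):

```python
from executorch.sdk import Inspector  # executorch.devtools in newer releases

# Correlate the runtime trace (ETDump) with export-time metadata (ETRecord).
inspector = Inspector(etdump_path="etdump.etdp", etrecord="etrecord.bin")

# Write the per-event timing table to inspector_out.txt, as described above.
with open("inspector_out.txt", "w") as f:
    inspector.print_data_tabular(f)
```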