Commit 0b4888b

Merge pull request #1294 from pytorch/docgen_update

docs: Update docgen task

2 parents: 8236218 + 12f39ac

File tree

16 files changed: +312 −75 lines

.github/workflows/docgen.yml

Lines changed: 46 additions & 5 deletions

@@ -10,9 +10,9 @@ on:
 
 jobs:
   build-docs:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-20.04
     container:
-      image: docker.pkg.github.com/pytorch/tensorrt/docgen:latest
+      image: nvidia/cuda:11.3.1-devel-ubuntu20.04
       credentials:
         username: ${{ github.actor }}
         password: ${{ secrets.GITHUB_TOKEN }}
@@ -22,23 +22,64 @@ jobs:
           rm -rf /usr/share/dotnet
           rm -rf /opt/ghc
           rm -rf "/usr/local/share/boost"
+          rm -rf /usr/local/cuda/cuda-*
+      - name: Install Python
+        run: |
+          apt update
+          apt install -y gcc git curl wget make zlib1g-dev bzip2 libbz2-dev lzma lzma-dev libreadline-dev libsqlite3-dev libssl-dev libffi-dev doxygen pandoc
+          mkdir -p /opt/circleci
+          git clone https://github.com/pyenv/pyenv.git /opt/circleci/.pyenv
+          export PYENV_ROOT="/opt/circleci/.pyenv"
+          export PATH="$PYENV_ROOT/shims:$PYENV_ROOT/bin:$PATH"
+          pyenv install 3.9.4
+          pyenv global 3.9.4
+          python3 -m pip install --upgrade pip
+          python3 -m pip install wheel
       - uses: actions/checkout@v2
         with:
           ref: ${{github.head_ref}}
       - name: Get HEAD SHA
         id: vars
        run: echo "::set-output name=sha::$(git rev-parse --short HEAD)"
+      - name: Get Bazel version
+        id: bazel_info
+        run: echo "::set-output name=version::$(cat .bazelversion)"
+      - name: Install Bazel
+        run: |
+          wget -q https://github.com/bazelbuild/bazel/releases/download/${{ steps.bazel_info.outputs.version }}/bazel-${{ steps.bazel_info.outputs.version }}-linux-x86_64 -O /usr/bin/bazel
+          chmod a+x /usr/bin/bazel
+      - name: Install cudnn + tensorrt
+        run: |
+          wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
+          mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
+          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
+          apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 536F8F1DE80F6A35
+          apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC
+          add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
+          apt-get update
+          apt-get install -y libcudnn8 libcudnn8-dev
+
+          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
+          add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
+          apt-get update
+
+          apt-get install -y libnvinfer8 libnvinfer-plugin8 libnvinfer-dev libnvinfer-plugin-dev
+      - name: Install Torch
+        run: |
+          python3 -m pip install -r py/requirements.txt
       - name: Build Python Package
         run: |
-          cp docker/WORKSPACE.docker WORKSPACE
+          cp toolchains/ci_workspaces/WORKSPACE.x86_64 WORKSPACE
           cd py
-          python3 setup.py install
+          pip install -e .
+          cd ..
       - name: Generate New Docs
         run: |
           cd docsrc
-          pip3 install -r requirements.txt
+          python3 -m pip install -r requirements.txt
           python3 -c "import torch_tensorrt; print(torch_tensorrt.__version__)"
           make html
+          cd ..
       - uses: stefanzweifel/git-auto-commit-action@v4
         with:
           # Required
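The new "Get Bazel version" step turns the repository's `.bazelversion` pin into the download URL used by "Install Bazel". A minimal Python sketch of that string plumbing (the version number and temporary directory here are invented for illustration; the real step reads the file checked into the repo root):

```python
import pathlib
import tempfile

# Stand-in for the repository checkout; the pinned version is hypothetical.
repo = pathlib.Path(tempfile.mkdtemp())
(repo / ".bazelversion").write_text("5.1.1\n")

# Equivalent of: $(cat .bazelversion)
version = (repo / ".bazelversion").read_text().strip()

# Equivalent of the wget URL built from ${{ steps.bazel_info.outputs.version }}
url = (
    "https://github.com/bazelbuild/bazel/releases/download/"
    f"{version}/bazel-{version}-linux-x86_64"
)
print(url)
```

Pinning the Bazel version through a file in the repo keeps CI and local builds in lockstep: bumping `.bazelversion` updates both at once.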

core/compiler.cpp

Lines changed: 2 additions & 1 deletion

@@ -426,7 +426,8 @@ torch::jit::Module CompileGraph(const torch::jit::Module& mod, CompileSpec cfg)
   auto outputIsCollection = conversion::OutputIsCollection(g->block());
   if (cfg.partition_info.enabled &&
       (cfg.lower_info.forced_fallback_modules.size() == 0 &&
-       cfg.partition_info.forced_fallback_operators.size() == 0 && isBlockConvertible) && !outputIsCollection) {
+       cfg.partition_info.forced_fallback_operators.size() == 0 && isBlockConvertible) &&
+      !outputIsCollection) {
     LOG_INFO("Skipping partitioning since model is fully supported");
   }
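The condition being reflowed above decides when partitioning is skipped entirely. Restated as a hypothetical Python predicate (names shortened for illustration; this is not the library's code):

```python
def should_skip_partitioning(
    partitioning_enabled: bool,
    num_forced_fallback_modules: int,
    num_forced_fallback_operators: int,
    block_convertible: bool,
    output_is_collection: bool,
) -> bool:
    """Mirror of the if-condition in core/compiler.cpp: partitioning is
    skipped only when it is enabled, nothing is forced to fall back,
    every op in the block is convertible, and the output is not a collection."""
    return (
        partitioning_enabled
        and num_forced_fallback_modules == 0
        and num_forced_fallback_operators == 0
        and block_convertible
        and not output_is_collection
    )

print(should_skip_partitioning(True, 0, 0, True, False))  # True
print(should_skip_partitioning(True, 0, 0, True, True))   # False: collection output
```

The `!outputIsCollection` clause is the substantive part of this hunk: models whose outputs are collections must still go through partitioning even when every op is convertible.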

docsrc/Makefile

Lines changed: 2 additions & 2 deletions

@@ -37,8 +37,8 @@ endif
 	rm -rf $(SOURCEDIR)/_tmp
 
 html:
-	mkdir -p $(SOURCEDIR)/_notebooks
-	cp -r $(SOURCEDIR)/../notebooks/*.ipynb $(SOURCEDIR)/_notebooks
+#	mkdir -p $(SOURCEDIR)/_notebooks
+#	cp -r $(SOURCEDIR)/../notebooks/*.ipynb $(SOURCEDIR)/_notebooks
 	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
 	mkdir -p $(DESTDIR)
 	cp -r $(BUILDDIR)/html/* $(DESTDIR)

docsrc/WORKSPACE.docs

Whitespace-only changes.
File renamed without changes.

docsrc/tutorials/getting_started_with_cpp_api.rst renamed to docsrc/getting_started/getting_started_with_cpp_api.rst

Lines changed: 3 additions & 3 deletions

@@ -1,7 +1,7 @@
-.. _getting_started:
+.. _getting_started_cpp:
 
-Getting Started with C++
-========================
+Using Torch-TensorRT in C++
+==============================
 
 If you haven't already, acquire a tarball of the library by following the instructions in :ref:`Installation`
 

docsrc/tutorials/getting_started_with_python_api.rst renamed to docsrc/getting_started/getting_started_with_python_api.rst

Lines changed: 12 additions & 5 deletions

@@ -3,8 +3,16 @@
 Using Torch-TensorRT in Python
 *******************************
 
-Torch-TensorRT Python API accepts a ```torch.nn.Module`` as an input. Under the hood, it uses ``torch.jit.script`` to convert the input module into a
-TorchScript module. To compile your input ```torch.nn.Module`` with Torch-TensorRT, all you need to do is provide the module and inputs
+The Torch-TensorRT Python API supports a number of unique usecases compared to the CLI and C++ APIs which solely support TorchScript compilation.
+
+Torch-TensorRT Python API can accept a ``torch.nn.Module``, ``torch.jit.ScriptModule``, or ``torch.fx.GraphModule`` as an input.
+Depending on what is provided one of the two frontends (TorchScript or FX) will be selected to compile the module. Provided the
+module type is supported, users may explicitly set which frontend they would like to use using the ``ir`` flag for ``compile``.
+If given a ``torch.nn.Module`` and the ``ir`` flag is set to either ``default`` or ``torchscript`` the module will be run through
+``torch.jit.script`` to convert the input module into a TorchScript module.
+
+
+To compile your input ``torch.nn.Module`` with Torch-TensorRT, all you need to do is provide the module and inputs
 to Torch-TensorRT and you will be returned an optimized TorchScript module to run or add into another PyTorch module. Inputs
 is a list of ``torch_tensorrt.Input`` classes which define input's shape, datatype and memory format. You can also specify settings such as
 operating precision for the engine or target device. After compilation you can save the module just like any other module
@@ -46,6 +54,5 @@ to load in a deployment application. In order to load a TensorRT/TorchScript mod
     input_data = input_data.to("cuda").half()
     result = trt_ts_module(input_data)
 
-Torch-TensorRT python API also provides ``torch_tensorrt.ts.compile`` which accepts a TorchScript module as input.
-The torchscript module can be obtained via scripting or tracing (refer to :ref:`creating_torchscript_module_in_python`). ``torch_tensorrt.ts.compile`` accepts a Torchscript module
-and a list of ``torch_tensorrt.Input`` classes.
+Torch-TensorRT Python API also provides ``torch_tensorrt.ts.compile`` which accepts a TorchScript module as input and ``torch_tensorrt.fx.compile`` which accepts a FX GraphModule as input.
+
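The frontend-selection rule the added paragraphs describe can be sketched as a toy dispatcher. This is an illustration of the documented behavior only, not torch_tensorrt's actual implementation; the string module kinds stand in for the real torch types:

```python
def select_frontend(module_kind: str, ir: str = "default") -> str:
    """Toy model of the documented dispatch: GraphModules (or an explicit
    ir="fx") go to the FX frontend, ScriptModules to TorchScript, and a
    plain nn.Module with ir "default"/"torchscript" is scripted first via
    torch.jit.script before TorchScript compilation."""
    if ir == "fx" or module_kind == "GraphModule":
        return "fx"
    if module_kind == "ScriptModule":
        return "torchscript"
    if module_kind == "nn.Module" and ir in ("default", "torchscript"):
        return "torchscript (after torch.jit.script)"
    raise ValueError(f"unsupported combination: {module_kind!r} with ir={ir!r}")

print(select_frontend("nn.Module"))    # torchscript (after torch.jit.script)
print(select_frontend("GraphModule"))  # fx
```

The value of the `ir` flag is that it lets users override the inference from module type when both frontends could handle the input.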

docsrc/index.rst

Lines changed: 39 additions & 28 deletions

@@ -22,53 +22,51 @@ More Information / System Architecture:
 Getting Started
 ----------------
 * :ref:`installation`
-* :ref:`getting_started`
+* :ref:`getting_started_with_python_api`
+* :ref:`getting_started_cpp`
+
+.. toctree::
+   :caption: Getting Started
+   :maxdepth: 1
+   :hidden:
+
+   getting_started/installation
+   getting_started/getting_started_with_python_api
+   getting_started/getting_started_with_cpp_api
+
+
+Tutorials
+------------
+* :ref:`creating_a_ts_mod`
+* :ref:`getting_started_with_fx`
 * :ref:`ptq`
-* :ref:`torchtrtc`
-* :ref:`use_from_pytorch`
 * :ref:`runtime`
-* :ref:`using_dla`
 * :ref:`serving_torch_tensorrt_with_triton`
-* :ref:`user_guide`
+* :ref:`use_from_pytorch`
+* :ref:`using_dla`
+* :ref:`notebooks`
 
 .. toctree::
-   :caption: Getting Started
+   :caption: Tutorials
    :maxdepth: 1
    :hidden:
 
-   tutorials/installation
-   tutorials/getting_started_with_cpp_api
-   tutorials/getting_started_with_python_api
   tutorials/creating_torchscript_module_in_python
+   tutorials/getting_started_with_fx_path
   tutorials/ptq
-   tutorials/torchtrtc
-   tutorials/use_from_pytorch
   tutorials/runtime
-   tutorials/using_dla
   tutorials/serving_torch_tensorrt_with_triton
-   tutorials/getting_started_with_fx_path
-
-.. toctree::
-   :caption: Notebooks
-   :maxdepth: 1
-   :hidden:
-
-   _notebooks/CitriNet-example
-   _notebooks/dynamic-shapes
-   _notebooks/EfficientNet-example
-   _notebooks/Hugging-Face-BERT
-   _notebooks/lenet-getting-started
-   _notebooks/Resnet50-example
-   _notebooks/ssd-object-detection-demo
-   _notebooks/vgg-qat
-
+   tutorials/use_from_pytorch
+   tutorials/using_dla
+   tutorials/notebooks
 
 Python API Documenation
 ------------------------
 * :ref:`torch_tensorrt_py`
 * :ref:`torch_tensorrt_logging_py`
 * :ref:`torch_tensorrt_ptq_py`
 * :ref:`torch_tensorrt_ts_py`
+* :ref:`torch_tensorrt_fx_py`
 
 .. toctree::
    :caption: Python API Documenation
@@ -79,6 +77,7 @@ Python API Documenation
    py_api/logging
    py_api/ptq
    py_api/ts
+   py_api/fx
 
 C++ API Documenation
 ----------------------
@@ -99,6 +98,18 @@ C++ API Documenation
    _cpp_api/namespace_torch_tensorrt__torchscript
    _cpp_api/namespace_torch_tensorrt__ptq
 
+CLI Documentation
+---------------------
+* :ref:`torchtrtc`
+
+.. toctree::
+   :caption: CLI Documenation
+   :maxdepth: 0
+   :hidden:
+
+   cli/torchtrtc
+
+
 Contributor Documentation
 --------------------------------
 * :ref:`system_overview`

docsrc/py_api/fx.rst

Lines changed: 31 additions & 0 deletions

@@ -0,0 +1,31 @@
+.. _torch_tensorrt_fx_py:
+
+torch_tensorrt.fx
+===================
+
+.. currentmodule:: torch_tensorrt.fx
+
+.. automodule torch_tensorrt.ts
+   :undoc-members:
+
+.. automodule:: torch_tensorrt.fx
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
+Functions
+------------
+
+.. autofunction:: compile
+
+
+Classes
+--------
+
+.. autoclass:: TRTModule
+
+.. autoclass:: InputTensorSpec
+
+.. autoclass:: TRTInterpreter
+
+.. autoclass:: TRTInterpreterResult

docsrc/py_api/torch_tensorrt.rst

Lines changed: 1 addition & 0 deletions

@@ -57,3 +57,4 @@ Submodules
    logging
    ptq
    ts
+   fx

docsrc/tutorials/getting_started_with_fx_path.rst

Lines changed: 15 additions & 28 deletions

@@ -1,42 +1,29 @@
-.. user_guide:
-Torch-TensorRT (FX Path) User Guide
-========================
-Torch-TensorRT (FX Path) is a tool that can convert a PyTorch model through torch.FX to an TensorRT engine optimized targeting running on Nvidia GPUs. TensorRT is the inference engine developed by Nvidia which composed of various kinds of optimization including kernel fusion, graph optimization, low precision, etc..
-This tool is developed in Python environment providing most usability to researchers and engineers. There are a few stages that a user want to use this tool and we will introduce them here.
-
-
-Installation
-------------
-Torch-TensorRT (FX Path) is in ``Beta`` phase and always recommended to work with PyTorch nightly.
+.. _getting_started_with_fx:
 
+Torch-TensorRT (FX Frontend) User Guide
+========================
+Torch-TensorRT (FX Frontend) is a tool that can convert a PyTorch model through ``torch.fx`` to an
+TensorRT engine optimized targeting running on Nvidia GPUs. TensorRT is the inference engine
+developed by NVIDIA which composed of various kinds of optimization including kernel fusion,
+graph optimization, low precision, etc.. This tool is developed in Python environment which allows this
+workflow to be very accessible to researchers and engineers. There are a few stages that a
+user want to use this tool and we will introduce them here.
 
-* Method 1. Follow the instrucions for Torch-TensorRT
-* Method 2. To install FX path only (Python path) and avoid the C++ build for torchscript path
+> Torch-TensorRT (FX Frontend) is in ``Beta`` and currently it is recommended to work with PyTorch nightly.
 
 .. code-block:: shell
 
-    $ conda create --name python_env python=3.8
-    $ conda activate python_env
-
-    # Recommend to install PyTorch 1.12 and later
-    $ conda install pytorch torchvision torchtext cudatoolkit=11.3 -c pytorch-nightly
-
-    # Install TensorRT python package
-    $ pip3 install nvidia-pyindex
-    $ pip3 install nvidia-tensorrt==8.2.4.2
-    $ git clone https://github.com/pytorch/TensorRT.git
-    $ cd TensorRT/py && python setup.py install --fx-only && cd ..
-
-    $ pyton -c "import torch_tensorrt.fx"
     # Test an example by
     $ python py/torch_tensorrt/fx/example/lower_example.py
 
 
 Converting a PyTorch Model to TensorRT Engine
 ---------------------------------------------
-In general, users are welcome to use the ``compile()`` to finish the conversion from a model to tensorRT engine. It is a wrapper API that consists of the major steps needed to finish this converison. Please refer to ``lower_example.py`` file in ``examples/fx``.
+In general, users are welcome to use the ``compile()`` to finish the conversion from a model to tensorRT engine. It is a
+wrapper API that consists of the major steps needed to finish this converison. Please refer to ``lower_example.py`` file in ``examples/fx``.
 
-In this section, we will go through an example to illustrate the major steps that FX path uses. Users can refer to ``fx2trt_example.py`` file in ``examples/fx``.
+In this section, we will go through an example to illustrate the major steps that fx path uses.
+Users can refer to ``fx2trt_example.py`` file in ``examples/fx``.
 
 * **Step 1: Trace the model with acc_tracer**
 Acc_tracer is a tracer inheritated from FX tracer. It comes with args normalizer to convert all args to kwargs and pass to TRT converters.
@@ -276,7 +263,7 @@ In the custom mapper function, we construct an acc op node and return it. The no
 
 The last step would be *adding unit test* for the new acc op or mapper function we added. The place to add the unit test is here `test_acc_tracer.py <https://github.com/pytorch/TensorRT/blob/master/py/torch_tensorrt/fx/test/tracer/test_acc_tracer.py>`_.
 
-* **Step 2. Add a new fx2trt converter**
+* **Step 2. Add a new converter**
 
 All the developed converters for acc ops are all in `acc_op_converter.py <https://github.com/pytorch/TensorRT/blob/master/py/torch_tensorrt/fx/converters/acc_ops_converters.py>`_. It could give you a good example of how the converter is added.
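The guide's description of acc_tracer says it normalizes all positional args to kwargs before nodes reach the TRT converters. A toy illustration of that normalization using the standard library (this is not the acc_tracer code, just the idea; `add` is an invented example function):

```python
import inspect

def normalize_to_kwargs(fn, args, kwargs):
    """Toy version of the args->kwargs normalization described for acc_tracer:
    bind positional arguments to their parameter names so a downstream
    converter only ever sees keyword arguments (defaults filled in too)."""
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)

def add(x, y, alpha=1):
    return x + alpha * y

print(normalize_to_kwargs(add, (2, 3), {}))  # {'x': 2, 'y': 3, 'alpha': 1}
```

Normalizing to kwargs means each converter can look arguments up by name regardless of how the original call site passed them, which is why the tracer does it before conversion.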
