[IR] Improve documentation 1/n #2227

Merged
merged 6 commits into from Apr 25, 2025
1 change: 1 addition & 0 deletions .gitignore
@@ -100,6 +100,7 @@ dmypy.json
*.onnxlib
**/onnx_backend_test_code/**
docs/auto_examples/*
docs/intermediate_representation/generated/*
tests/export/*
tests/models/testoutputs/*
tests/mylib.onnxlib
14 changes: 14 additions & 0 deletions docs/_templates/classtemplate.rst
@@ -0,0 +1,14 @@
.. role:: hidden
    :class: hidden-section
.. currentmodule:: {{ module }}


{{ name | underline}}

.. autoclass:: {{ name }}
    :members:


..
    autogenerated from docs/_templates/classtemplate.rst
    note it does not have :inherited-members:
15 changes: 15 additions & 0 deletions docs/intermediate_representation/index.md
@@ -1,9 +1,24 @@
# ONNX IR

An in-memory IR that supports the full ONNX spec, designed for graph construction, analysis and transformation.

## Features ✨

- Full ONNX spec support: all valid models representable by ONNX protobuf, and a subset of invalid models (so you can load and fix them).
- Low memory footprint: mmap'ed external tensors; a unified interface for ONNX TensorProto, NumPy arrays, PyTorch tensors, etc. No tensor size limitation. Zero copies.
- Straightforward access patterns: access value information and traverse the graph topology with ease (see the sketch below).
- Robust mutation: create as many iterators as you like on the graph while mutating it.
- Speed: performant graph manipulation and serialization/deserialization to Protobuf.
- Pythonic and familiar APIs: classes define Pythonic APIs while still mapping intuitively to ONNX protobuf concepts.
- No protobuf dependency: the IR does not require protobuf once the model is converted to the IR representation, decoupling it from the serialization format.
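
A minimal sketch of those access patterns, assuming a local `model.onnx` file (the path is a placeholder):

```python
import onnxscript.ir as ir

# "model.onnx" is a placeholder; point this at any ONNX model file.
model = ir.load("model.onnx")

# Walk the graph topology directly; no protobuf objects are involved.
for node in model.graph:
    input_names = [v.name for v in node.inputs if v is not None]
    print(node.op_type, input_names)
```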

## Get started

```{toctree}
:maxdepth: 1
getting_started
tensors
ir_api
generated
```
45 changes: 41 additions & 4 deletions docs/intermediate_representation/ir_api.md
@@ -1,9 +1,46 @@
# onnxscript.ir

<!-- TODO: Organize the orders and add tutorial -->
```{eval-rst}
.. automodule::onnxscript.ir
```

## IR objects

```{eval-rst}
.. automodule:: onnxscript.ir
    :members:
    :undoc-members:
.. currentmodule:: onnxscript
.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: classtemplate.rst

    ir.Model
    ir.Graph
    ir.GraphView
    ir.Function
    ir.Node
    ir.Value
    ir.Attr
    ir.RefAttr
    ir.Shape
    ir.SymbolicDim
    ir.TypeAndShape
    ir.TensorType
    ir.SparseTensorType
    ir.SequenceType
    ir.OptionalType
    ir.Tensor
    ir.ExternalTensor
    ir.StringTensor
```
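
As a rough illustration of how these classes compose, the sketch below builds a single-node graph by hand; the keyword arguments follow the signatures documented above but should be treated as illustrative, not authoritative.

```python
import onnxscript.ir as ir

# Two graph inputs with explicit element type and shape.
x = ir.Value(name="x", type=ir.TensorType(ir.DataType.FLOAT), shape=ir.Shape([2, 3]))
y = ir.Value(name="y", type=ir.TensorType(ir.DataType.FLOAT), shape=ir.Shape([2, 3]))

# One node in the default ONNX domain producing a single output value.
add = ir.Node("", "Add", inputs=[x, y], num_outputs=1)

graph = ir.Graph(
    inputs=[x, y],
    outputs=list(add.outputs),
    nodes=[add],
    opset_imports={"": 20},
    name="add_graph",
)
model = ir.Model(graph, ir_version=10)
print(model)
```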

## Enums

```{eval-rst}
.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: classtemplate.rst

    ir.DataType
    ir.AttributeType
```
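
Both enums mirror the corresponding ONNX protobuf enums, so their integer values line up with `TensorProto.DataType` and `AttributeProto.AttributeType`. A tiny sketch:

```python
import onnxscript.ir as ir

dt = ir.DataType.FLOAT
print(dt.name, int(dt))           # FLOAT 1 (the TensorProto.FLOAT value)
print(ir.AttributeType.INT.name)  # INT
```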
2 changes: 1 addition & 1 deletion docs/intermediate_representation/tensors.md
@@ -167,7 +167,7 @@ The following example shows how to create a `FLOAT8E4M3FN` tensor, transform its
print("tensor.numpy():", tensor.numpy()) # [0.00195312 0.00585938]
# Compute
times_100 = tensor.numpy() * 100
times_100 = tensor.numpy() * np.array(100, dtype=tensor.numpy().dtype)
print("times_100:", times_100)
# Create a new tensor out of the new value; dtype must be specified
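
The updated line wraps the scalar in an array of the tensor's own dtype so the computation stays in the float8 domain; a bare Python scalar can behave differently under NumPy's promotion rules. A standalone sketch of the same pattern (assuming `ml_dtypes` is installed, which supplies the float8 NumPy dtypes the IR uses):

```python
import ml_dtypes
import numpy as np

arr = np.array([0.00195312, 0.00585938], dtype=ml_dtypes.float8_e4m3fn)
# Matching the scalar's dtype keeps the result in float8_e4m3fn.
scaled = arr * np.array(100, dtype=arr.dtype)
print(scaled.dtype)  # float8_e4m3fn
```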