Further update documentation for 0.7.0 #744

Merged
merged 11 commits on Apr 14, 2023
33 changes: 33 additions & 0 deletions CITATION.cff
@@ -4,6 +4,7 @@ type: software
authors:
- given-names: "FastML Team"
title: "hls4ml"
version: "v0.7.0rc1"
doi: 10.5281/zenodo.1201549
repository-code: "https://github.com/fastmachinelearning/hls4ml"
url: "https://fastmachinelearning.org/hls4ml"
@@ -21,3 +22,35 @@ abstract: |
hls4ml is an open-source software-hardware codesign workflow
to interpret and translate machine learning algorithms for
implementations in hardware, including FPGAs and ASICs.
references:
- type: article
title: "Fast inference of deep neural networks on FPGAs with hls4ml"
authors:
- family-names: "Duarte"
given-names: "Javier"
- family-names: "Han"
given-names: "Song"
- family-names: "Harris"
given-names: "Philip"
- family-names: "Jindariani"
given-names: "Sergo"
- family-names: "Kreinar"
given-names: "Edward"
- family-names: "Kreis"
given-names: "Benjamin"
- family-names: "Ngadiuba"
given-names: "Jennifer"
- family-names: "Pierini"
given-names: "Maurizio"
- family-names: "Rivera"
given-names: "Ryan"
- family-names: "Tran"
given-names: "Nhan"
- family-names: "Wu"
given-names: "Zhenbin"
journal: "JINST"
volume: "13"
start: "P07027"
doi: "10.1088/1748-0221/13/07/P07027"
year: "2018"
number: "07"
5 changes: 3 additions & 2 deletions README.md
@@ -64,11 +64,12 @@ hls4ml.report.read_vivado_report('my-hls-test')
# Citation
If you use this software in a publication, please cite the software
```bibtex
@software{fastml_hls4ml,
author = {{FastML Team}},
title = {fastmachinelearning/hls4ml},
year = 2023,
publisher = {Zenodo},
version = {v0.7.0rc1},
doi = {10.5281/zenodo.1201549},
url = {https://github.com/fastmachinelearning/hls4ml}
}
77 changes: 77 additions & 0 deletions docs/advanced/accelerator.rst
@@ -0,0 +1,77 @@
=========================
VivadoAccelerator Backend
=========================

The ``VivadoAccelerator`` backend of ``hls4ml`` leverages the `PYNQ <http://pynq.io/>`_ software stack to easily deploy models on supported devices.
Currently ``hls4ml`` supports the following boards:

* `pynq-z2 <https://www.xilinx.com/support/university/xup-boards/XUPPYNQ-Z2.html>`_ (part: ``xc7z020clg400-1``)
* `zcu102 <https://www.xilinx.com/products/boards-and-kits/ek-u1-zcu102-g.html>`_ (part: ``xczu9eg-ffvb1156-2-e``)
* `alveo-u50 <https://www.xilinx.com/products/boards-and-kits/alveo/u50.html>`_ (part: ``xcu50-fsvh2104-2-e``)
* `alveo-u250 <https://www.xilinx.com/products/boards-and-kits/alveo/u250.html>`_ (part: ``xcu250-figd2104-2L-e``)
* `alveo-u200 <https://www.xilinx.com/products/boards-and-kits/alveo/u200.html>`_ (part: ``xcu200-fsgd2104-2-e``)
* `alveo-u280 <https://www.xilinx.com/products/boards-and-kits/alveo/u280.html>`_ (part: ``xcu280-fsvh2892-2L-e``)

but, in principle, support can be extended to `any board supported by PYNQ <http://www.pynq.io/board.html>`_.
For the Zynq-based boards, there are two components: an ARM-based processing system (PS) and FPGA-based programmable logic (PL), with various interfaces between the two.

.. image:: ../img/zynq_interfaces.png
:height: 300px
:align: center
:alt: Zynq PL/PS interfaces

Neural Network Overlay
======================

In the PYNQ project, programmable logic circuits are presented as hardware libraries called *overlays*.
The overlay can be accessed through a Python API.
In ``hls4ml``, we create a custom **neural network overlay**, which sends and receives data via AXI stream.
The target device is programmed using a bitfile that is generated by the ``VivadoAccelerator`` backend.

.. image:: ../img/pynqframe.png
:width: 600px
:align: center
:alt: PYNQ software stack

Example
=======

This example is taken from `part 7 of the hls4ml tutorial <https://github.com/fastmachinelearning/hls4ml-tutorial/blob/master/part7_deployment.ipynb>`_.
Specifically, we'll deploy a model on a ``pynq-z2`` board.

First, we generate the bitfile from a Keras model ``model`` and a config.

.. code-block:: Python

   import hls4ml

   # Generate an hls4ml configuration and model from the trained Keras model
   config = hls4ml.utils.config_from_keras_model(model, granularity='name')
   hls_model = hls4ml.converters.convert_from_keras_model(model,
                                                          hls_config=config,
                                                          output_dir='hls4ml_prj_pynq',
                                                          backend='VivadoAccelerator',
                                                          board='pynq-z2')
   hls_model.build(csim=False, export=True, bitfile=True)


After this command completes, we will need to package up the bitfile, hardware handoff, and Python driver to copy to the PS of the board.

.. code-block:: bash

   mkdir -p package
   cp hls4ml_prj_pynq/myproject_vivado_accelerator/project_1.runs/impl_1/design_1_wrapper.bit package/hls4ml_nn.bit
   cp hls4ml_prj_pynq/myproject_vivado_accelerator/project_1.srcs/sources_1/bd/design_1/hw_handoff/design_1.hwh package/hls4ml_nn.hwh
   cp hls4ml_prj_pynq/axi_stream_driver.py package/
   tar -czvf package.tar.gz -C package/ .

Then we can copy this package to the PS of the board and untar it.
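For example, assuming the board is reachable over the network with the default PYNQ user (the address below is only an illustrative assumption):

.. code-block:: bash

   # Copy the package to the board and unpack it; 192.168.2.99 is the
   # typical PYNQ USB-ethernet address and may differ on your setup.
   scp package.tar.gz xilinx@192.168.2.99:~
   ssh xilinx@192.168.2.99 'mkdir -p hls4ml_nn && tar -xzvf package.tar.gz -C hls4ml_nn'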

Finally, on the PS in Python we can create a ``NeuralNetworkOverlay`` object, which will download the bitfile onto the PL of the board.
We must also provide the shapes of our input and output data, ``X_test.shape`` and ``y_test.shape``, respectively, to allocate the buffers for the data transfer.
The ``predict`` method will send the input data to the PL and return the output data ``y_hw``.

.. code-block:: Python

   from axi_stream_driver import NeuralNetworkOverlay

   nn = NeuralNetworkOverlay('hls4ml_nn.bit', X_test.shape, y_test.shape)
   y_hw, latency, throughput = nn.predict(X_test, profile=True)
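As a quick sanity check, the hardware predictions can be compared against the reference labels. This sketch assumes a classification model with one-hot labels; the units of ``latency`` and ``throughput`` are as reported by the driver:

.. code-block:: Python

   import numpy as np

   # Fraction of samples where the hardware prediction matches the label.
   accuracy = np.mean(np.argmax(y_hw, axis=1) == np.argmax(y_test, axis=1))
   print(f'Hardware accuracy: {accuracy:.4f}')
   print(f'Latency: {latency}, throughput: {throughput}')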
1 change: 1 addition & 0 deletions docs/conf.py
@@ -124,3 +124,4 @@ def get_pypi_version(package, url_pattern=URL_PATTERN):
'github_version': 'main', # Version
'conf_py_path': '/docs/', # Path in the checkout to the docs root
}
html_favicon = 'img/hls4ml_logo.svg'
62 changes: 45 additions & 17 deletions docs/flows.rst
@@ -2,32 +2,60 @@
Optimizer Passes and Flows
==========================

The ``hls4ml`` library parses models from Keras, PyTorch or ONNX into an internal execution graph. This model graph is represented with the
:py:class:`~hls4ml.model.graph.ModelGraph` class. The nodes in this graph, corresponding to the layers and operations of the input model, are represented
by classes derived from the :py:class:`~hls4ml.model.layers.Layer` base class.

Layers are required to have defined inputs and outputs that determine how they are connected in the graph and what the shape of their output is. All information
about a layer's state and configuration is stored in its attributes. All weights, variables and data types are attributes, and there are mapping views to sort through them.
Layers can define expected attributes, which can be verified for correctness or used to produce a list of configurable attributes that the user can tweak.
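As an illustrative sketch of walking the graph and reading these attributes (treat the exact accessor names as assumptions to check against the installed version):

.. code-block:: Python

   # Assumes hls_model is a ModelGraph, e.g. from convert_from_keras_model.
   for layer in hls_model.get_layers():
       print(layer.name, layer.class_name)
       # All state lives in attributes; get_attr looks one up by name.
       print('  reuse_factor:', layer.get_attr('reuse_factor', None))
       # The weight variables are among the attributes as well.
       for w in layer.get_weights():
           print('  weight:', w.name, w.shape)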

Optimizer passes
----------------

To reach a state from which the code can be generated, the internal model graph undergoes a series of optimizations (transformations), dubbed *optimizer passes*.
All transformations of the model and any modification to a layer's attributes must be implemented through an optimizer pass. All optimizer passes derive from
the :py:class:`~hls4ml.model.optimizer.optimizer.OptimizerPass` class. Optimizer passes are applied at the level of nodes/layers; however, a special class,
:py:class:`~hls4ml.model.optimizer.optimizer.ModelOptimizerPass`, exists that is applied to the full model. Subclasses of
:py:class:`~hls4ml.model.optimizer.optimizer.OptimizerPass` must provide the matching criteria in the ``match`` function; if satisfied, the transformation from the
``transform`` function is performed. The boolean return value of ``transform`` indicates whether the optimizer pass made changes to the model graph, requiring the optimizers to run again.
An example of an optimizer pass that runs on the full model is :py:class:`~hls4ml.model.optimizer.passes.stamp.MakeStamp`, while an example of a layer optimizer is
the :py:class:`~hls4ml.model.optimizer.passes.fuse_biasadd` class, which fuses a bias addition into a :py:class:`~hls4ml.model.layers.Dense`,
:py:class:`~hls4ml.model.layers.Conv1D`, or :py:class:`~hls4ml.model.layers.Conv2D` layer.
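To make the ``match``/``transform`` mechanics concrete, here is a minimal sketch of a node-level pass. The ``ToyActivation`` layer and fused attribute are hypothetical, and the helper methods (``get_input_node``, ``remove_node``) should be checked against the installed version:

.. code-block:: Python

   from hls4ml.model.optimizer import OptimizerPass

   class FuseToyActivation(OptimizerPass):
       def match(self, node):
           # Criteria: fire only on a hypothetical 'ToyActivation' node
           # that directly follows a Dense layer.
           return node.class_name == 'ToyActivation' and node.get_input_node().class_name == 'Dense'

       def transform(self, model, node):
           # Fold the activation into the preceding Dense node and remove it.
           node.get_input_node().set_attr('activation', node.get_attr('activation'))
           model.remove_node(node)
           return True  # the graph changed, so the optimizers run again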

Optimizers can be general, independent of the backend, in which case they are located in :py:mod:`hls4ml.model.optimizer.passes`, or they may be backend-specific,
in which case they are located in a folder dependent on the backend, e.g., :py:mod:`hls4ml.backends.vivado.passes` or
:py:mod:`hls4ml.backends.quartus.passes`. A common set of optimizers that are used by FPGA backends are located in :py:mod:`hls4ml.backends.fpga.passes`.

Certain optimizers are used frequently enough that it makes sense to define special classes, which inherit from :py:class:`~hls4ml.model.optimizer.optimizer.OptimizerPass`:

* :py:class:`~hls4ml.model.optimizer.optimizer.GlobalOptimizerPass`: An optimizer pass that matches each node. This is useful, for example,
to transform the types for a particular backend.
* :py:class:`~hls4ml.model.optimizer.optimizer.LayerOptimizerPass`: An optimizer pass that matches each node of a particular layer type. This is
useful, for example, to write out the HLS code for a particular node that remains in the final graph.
* :py:class:`~hls4ml.model.optimizer.optimizer.ConfigurableOptimizerPass`: An optimizer pass that has some configurable parameters.
* :py:class:`~hls4ml.backends.template.Template`: An optimizer pass that populates a code template and assigns it to an attribute of a given layer. This is commonly used
to generate code blocks in later stages of the conversion.

Note that :py:class:`~hls4ml.model.optimizer.optimizer.LayerOptimizerPass` and :py:class:`~hls4ml.model.optimizer.optimizer.ModelOptimizerPass`
also exist as decorators that wrap a function.

New optimizers can be registered with the :py:func:`~hls4ml.model.optimizer.optimizer.register_pass` function, as sketched below. Optimizers should be assigned to a flow (see the next section).
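As a sketch, registering the hypothetical pass from the earlier example (check the exact ``register_pass`` signature against the installed version):

.. code-block:: Python

   from hls4ml.model.optimizer import register_pass

   # Register the pass under a name so it can be referenced when
   # composing flows; FuseToyActivation is the sketch from above.
   register_pass('fuse_toy_activation', FuseToyActivation)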

Flows
-----
A :py:class:`~hls4ml.model.flow.flow.Flow` is an ordered set of optimizers that represents a single stage in the conversion process. The optimizers from a flow are applied
until they no longer make changes to the model graph, after which the next flow (stage) can start. Flows may depend on other flows being applied before them,
ensuring the model graph is in a desired state before a flow starts. The function :py:func:`~hls4ml.model.flow.flow.register_flow` is used to register a new flow. Flows
are applied on a model graph with :py:func:`~hls4ml.model.graph.ModelGraph.apply_flow`, as in the sketch below.
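A minimal sketch, reusing the hypothetical pass registered above (``'convert'`` is an existing model-level flow; verify the ``register_flow`` signature against the installed version):

.. code-block:: Python

   from hls4ml.model.flow import register_flow

   # Group registered passes into a named flow; `requires` lists flows
   # that must have been applied before this one.
   flow_name = register_flow('toy_optimize', ['fuse_toy_activation'], requires=['convert'])

   # Apply the flow to a ModelGraph.
   hls_model.apply_flow(flow_name)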

There are common model-level flows, such as the `convert and optimize <https://github.com/fastmachinelearning/hls4ml/blob/7c0a065935904f50bd7e4c547f85354b36276092/hls4ml/model/optimizer/__init__.py#L14-L20>`_
flows, that can run regardless of the backend, and there are backend-specific flows.
Each backend provides a default flow that defines the default target for that backend. For example, the Vivado backend defaults to an
`IP flow <https://github.com/fastmachinelearning/hls4ml/blob/7c0a065935904f50bd7e4c547f85354b36276092/hls4ml/backends/vivado/vivado_backend.py#L148-L160>`_
that produces an IP. It runs no optimizers itself, but requires many other flows (sub-flows) to have run first; the convert and optimize flows mentioned above are some of these required sub-flows.
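For example, a backend's default flow can be queried (a sketch; the accessor names are assumptions to verify against the installed version):

.. code-block:: Python

   import hls4ml

   backend = hls4ml.backends.get_backend('Vivado')
   print(backend.get_default_flow())  # e.g. 'vivado:ip'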

Another example is FIFO buffer depth optimization explained in the :ref:`FIFO Buffer Depth Optimization` section.
Binary file added docs/img/pynqframe.png
Binary file added docs/img/zynq_interfaces.png
5 changes: 3 additions & 2 deletions docs/index.rst
@@ -2,11 +2,11 @@
:hidden:
:caption: Introduction

status
setup
release_notes
command
concepts
details
flows
reference
@@ -24,6 +24,7 @@

advanced/fifo_depth
advanced/extension
advanced/accelerator

.. toctree::
:hidden:
5 changes: 3 additions & 2 deletions docs/reference.rst
@@ -9,11 +9,12 @@ If you use this software in a publication, please cite the software

.. code-block:: bibtex

@software{fastml_hls4ml,
author = {{FastML Team}},
title = {fastmachinelearning/hls4ml},
year = 2023,
publisher = {Zenodo},
version = {v0.7.0rc1},
doi = {10.5281/zenodo.1201549},
url = {https://github.com/fastmachinelearning/hls4ml}
}