
Update documentation for v0.7.0 release #710


Merged (41 commits) on Apr 1, 2023
2a139c3
start updating docs
jmduarte Feb 11, 2023
19091dd
update
jmduarte Feb 11, 2023
c292707
Add
jmduarte Feb 11, 2023
f343f27
test
jmduarte Feb 11, 2023
410c793
update
jmduarte Feb 11, 2023
5bd8c9c
update
jmduarte Feb 11, 2023
54fb938
add sphinx_contributors to requirements
jmduarte Feb 11, 2023
91de843
update
jmduarte Feb 11, 2023
74e89b5
update
jmduarte Feb 12, 2023
87f9c68
update
jmduarte Feb 12, 2023
9aca6dc
update
jmduarte Feb 14, 2023
50dc256
update
jmduarte Feb 14, 2023
1fed143
update
jmduarte Feb 14, 2023
c9ddbdb
update
jmduarte Feb 14, 2023
f1f72de
fix docstring issues dicovered when doing sphinx build
jmitrevs Mar 8, 2023
bff7cae
pre-commit fixes
jmitrevs Mar 8, 2023
cd0b62b
add deprecation
jmitrevs Mar 9, 2023
e0160f2
fix docstring spelling errors
jmitrevs Mar 9, 2023
b15adc5
Add some graph and layers docstrings, add more documentation
jmitrevs Mar 10, 2023
f57d5fc
Merge branch 'main' into docs
jmitrevs Mar 10, 2023
76f438a
pre-commit fix
jmitrevs Mar 10, 2023
74c78e4
function doesn't exist
jmduarte Mar 18, 2023
5be16b9
docs
jmduarte Mar 18, 2023
ff48663
fix Dense layer import
jmduarte Mar 19, 2023
76f5ae6
Update README.md
jmduarte Mar 23, 2023
5006266
Merge branch 'main' into docs
jmitrevs Mar 30, 2023
deeada8
add more documentation
jmitrevs Mar 30, 2023
11fbf99
update docs
jmduarte Mar 31, 2023
67a4b2d
pre-commit
jmduarte Mar 31, 2023
6c3d8c2
export GITHUB_TOKEN
jmduarte Mar 31, 2023
e70b257
try again
jmduarte Mar 31, 2023
f59c40f
update release vs version
jmduarte Mar 31, 2023
41fcb65
remove more layers in documentation
jmitrevs Mar 31, 2023
17c04e6
fix version
jmduarte Mar 31, 2023
896cb18
try .
jmduarte Mar 31, 2023
34d4506
fetch depth
jmduarte Mar 31, 2023
68ff412
API docs for attributes
vloncar Mar 31, 2023
2ea908a
try checking out head
jmduarte Mar 31, 2023
a41fb38
API docs for types
vloncar Mar 31, 2023
9417782
link to vivado ip flow code
jmduarte Mar 31, 2023
f952bc8
Merge branch 'main' into docs
jmitrevs Mar 31, 2023
7 changes: 5 additions & 2 deletions .github/workflows/build-sphinx.yml
@@ -1,7 +1,7 @@
name: build-sphinx
on:
push:
branches:
- main

jobs:
@@ -10,7 +10,10 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- uses: jmduarte/sphinx-action@main
with:
docs-folder: "docs/"
4 changes: 2 additions & 2 deletions .github/workflows/pypi-publish.yml
@@ -1,5 +1,5 @@
name: 📦 Packaging release to PyPI
on:
release:
types: [released]

@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout source
uses: actions/checkout@v3
- uses: actions/setup-python@v2
with:
python-version: '3.x' # Version range or exact version of a Python version to use, using SemVer's version range syntax
7 changes: 5 additions & 2 deletions .github/workflows/test-sphinx.yml
@@ -1,7 +1,7 @@
name: test-sphinx
on:
pull_request:
branches:
- main

jobs:
@@ -10,7 +10,10 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- uses: jmduarte/sphinx-action@main
with:
docs-folder: "docs/"
28 changes: 16 additions & 12 deletions README.md
@@ -1,10 +1,13 @@
<p float="left">
<img src="https://github.com/fastmachinelearning/fastmachinelearning.github.io/raw/master/images/hls4ml_logo.svg" alt="hls4ml" width="400"/>
</p>

[![DOI](https://zenodo.org/badge/108329371.svg)](https://zenodo.org/badge/latestdoi/108329371)
[![License](https://img.shields.io/badge/License-Apache_2.0-red.svg)](https://opensource.org/licenses/Apache-2.0)
[![Documentation Status](https://github.com/fastmachinelearning/hls4ml/actions/workflows/build-sphinx.yml/badge.svg)](https://fastmachinelearning.org/hls4ml)
[![PyPI version](https://badge.fury.io/py/hls4ml.svg)](https://badge.fury.io/py/hls4ml)
[![Supported Python versions](https://img.shields.io/pypi/pyversions/hls4ml.svg)](https://pypi.org/project/hls4ml/)
[![Downloads](https://static.pepy.tech/personalized-badge/hls4ml?period=total&units=international_system&left_color=grey&right_color=orange&left_text=Downloads)](https://pepy.tech/project/hls4ml)
<a href="https://anaconda.org/conda-forge/hls4ml/"><img alt="conda-forge" src="https://img.shields.io/conda/dn/conda-forge/hls4ml.svg?label=conda-forge"></a>

A package for machine learning inference in FPGAs. We create firmware implementations of machine learning algorithms using high-level synthesis (HLS). We translate models from traditional open-source machine learning packages into HLS that can be configured for your use case!

@@ -17,13 +17,13 @@ For more information visit the webpage: [https://fastmachinelearning.org/hls4ml/
Detailed tutorials on how to use `hls4ml`'s various functionalities can be found [here](https://github.com/hls-fpga-machine-learning/hls4ml-tutorial).

# Installation
```bash
pip install hls4ml
```

To install the extra dependencies for profiling:

```bash
pip install hls4ml[profiling]
```

@@ -32,13 +32,14 @@ pip install hls4ml[profiling]
```Python
import hls4ml

# Fetch a keras model from our example repository
# This will download our example model to your working directory and return an example configuration file
config = hls4ml.utils.fetch_example_model('KERAS_3layer.json')

# You can print the configuration to see some default parameters
print(config)

# Convert it to a hls project
hls_model = hls4ml.converters.keras_to_hls(config)

# Print full list of example models if you want to explore more
@@ -49,11 +53,11 @@ hls4ml.utils.fetch_example_list()
Note: Vitis HLS is not yet supported. Vivado HLS versions between 2018.2 and 2020.1 are recommended.

```Python
# Use Vivado HLS to synthesize the model
# This might take several minutes
hls_model.build()

# Print out the report if you want
hls4ml.report.read_vivado_report('my-hls-test')
```

185 changes: 185 additions & 0 deletions docs/advanced/extension.rst
@@ -0,0 +1,185 @@
========================
Extension API
========================

``hls4ml`` natively supports a large number of neural network layers.
But what if a desired layer is not supported?
If it is standard enough and its implementation would benefit the community as a whole, we would welcome a contribution to add it to the standard set of supported layers.
However, if it is a somewhat niche custom layer, there is another approach we can take to extend hls4ml through the *extension API*.

This documentation will walk through a `complete end-to-end example <https://github.com/fastmachinelearning/hls4ml/blob/main/test/pytest/test_extensions.py>`_, which is part of our testing suite.
To implement a custom layer in ``hls4ml`` with the extension API, the required components are:

* Your custom layer class
* Equivalent hls4ml custom layer class
* Parser for the converter
* HLS implementation
* Layer config template
* Function config template
* Registration of layer, source code, and templates

Complete example
================

For concreteness, let's say our custom layer ``KReverse`` is implemented in Keras and reverses the order of the last dimension of the input.

.. code-block:: Python

# Keras implementation of a custom layer
class KReverse(tf.keras.layers.Layer):
'''Keras implementation of a hypothetical custom layer'''

def __init__(self):
super().__init__()

def call(self, inputs):
return tf.reverse(inputs, axis=[-1])
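
To make the operation concrete: ``tf.reverse(inputs, axis=[-1])`` flips the last dimension of each sample. A plain-Python sketch (no TensorFlow required, purely illustrative) behaves the same way on a 2D batch:

```python
# Plain-Python mirror of what KReverse computes on a 2D batch:
# each row (the last dimension) is reversed, batch order is untouched.
def reverse_last_dim(batch):
    return [row[::-1] for row in batch]

batch = [[1, 2, 3], [4, 5, 6]]
print(reverse_last_dim(batch))  # → [[3, 2, 1], [6, 5, 4]]
```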

We can define the equivalent hls4ml layer, ``HReverse``, which inherits from ``hls4ml.model.layers.Layer``.

.. code-block:: Python

# hls4ml layer implementation
class HReverse(hls4ml.model.layers.Layer):
'''hls4ml implementation of a hypothetical custom layer'''

def initialize(self):
inp = self.get_input_variable()
shape = inp.shape
dims = inp.dim_names
self.add_output_variable(shape, dims)

A parser for the Keras to HLS converter is also required.
This parser reads the attributes of the Keras layer instance and populates a dictionary of attributes for the hls4ml layer.
It also returns a list of output shapes (one shape for each output).
In this case, there is a single output with the same shape as the input.

.. code-block:: Python

# Parser for converter
def parse_reverse_layer(keras_layer, input_names, input_shapes, data_reader):
layer = {}
layer['class_name'] = 'HReverse'
layer['name'] = keras_layer['config']['name']
layer['n_in'] = input_shapes[0][1]

if input_names is not None:
layer['inputs'] = input_names

return layer, [shape for shape in input_shapes[0]]
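
Since the parser is plain Python, it can be exercised in isolation with a mock layer dictionary before wiring it into the converter. The snippet below repeats the parser so it runs standalone; the layer name and shapes are illustrative:

```python
# Parser repeated from above so this snippet is self-contained.
def parse_reverse_layer(keras_layer, input_names, input_shapes, data_reader):
    layer = {}
    layer['class_name'] = 'HReverse'
    layer['name'] = keras_layer['config']['name']
    layer['n_in'] = input_shapes[0][1]

    if input_names is not None:
        layer['inputs'] = input_names

    return layer, [shape for shape in input_shapes[0]]

# Hypothetical arguments mimicking what the converter would pass in.
layer, output_shape = parse_reverse_layer(
    {'config': {'name': 'hreverse_1'}}, ['input_1'], [[None, 8]], None
)
print(layer['n_in'], output_shape)  # → 8 [None, 8]
```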

Next, we need the actual HLS implementation of the function, which can be written in a header file ``nnet_reverse.h``.

.. code-block:: C++

#ifndef NNET_REVERSE_H_
#define NNET_REVERSE_H_

#include "nnet_common.h"

namespace nnet {

struct reverse_config {
static const unsigned n_in = 10;
};

template<class data_T, typename CONFIG_T>
void reverse(
data_T input[CONFIG_T::n_in],
data_T reversed[CONFIG_T::n_in]
) {
for (int i = 0; i < CONFIG_T::n_in; i++) {
reversed[CONFIG_T::n_in - 1 - i] = input[i];
}
}

}

#endif
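
The loop writes ``input[i]`` to position ``n_in - 1 - i`` of the output. A Python mirror of the same index arithmetic (illustrative only, not part of hls4ml) confirms that this produces a full reversal:

```python
# Python mirror of the index arithmetic in nnet::reverse.
def nnet_reverse(data, n_in):
    reversed_out = [None] * n_in
    for i in range(n_in):
        reversed_out[n_in - 1 - i] = data[i]
    return reversed_out

print(nnet_reverse(list(range(10)), 10))  # → [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```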

Now, we can define the layer config and function call templates.
These two templates determine how to populate the config template based on the layer attributes and the function call signature for the layer in HLS, respectively.

.. code-block:: Python

rev_config_template = """struct config{index} : nnet::reverse_config {{
static const unsigned n_in = {n_in};
}};\n"""

rev_function_template = 'nnet::reverse<{input_t}, {config}>({input}, {output});'
rev_include_list = ['nnet_utils/nnet_reverse.h']


class HReverseConfigTemplate(hls4ml.backends.template.LayerConfigTemplate):
def __init__(self):
super().__init__(HReverse)
self.template = rev_config_template

def format(self, node):
params = self._default_config_params(node)
return self.template.format(**params)


class HReverseFunctionTemplate(hls4ml.backends.template.FunctionCallTemplate):
def __init__(self):
super().__init__(HReverse, include_header=rev_include_list)
self.template = rev_function_template

def format(self, node):
params = self._default_function_params(node)
return self.template.format(**params)
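
The config template is an ordinary Python format string, so the HLS code it generates can be previewed by formatting it directly. The ``index`` and ``n_in`` values below are illustrative stand-ins for what ``_default_config_params`` would extract from the layer:

```python
rev_config_template = """struct config{index} : nnet::reverse_config {{
    static const unsigned n_in = {n_in};
}};\n"""

# Illustrative values standing in for the layer attributes.
rendered = rev_config_template.format(index=2, n_in=8)
print(rendered)
# struct config2 : nnet::reverse_config {
#     static const unsigned n_in = 8;
# };
```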

Now, we need to tell hls4ml about the existence of this new layer by registering it.
We also need to register the parser (a.k.a. the layer handler), the template passes, and HLS implementation source code with the particular backend.
In this case, the HLS code is valid for both the Vivado and Quartus backends.

.. code-block:: Python

# Register the converter for custom Keras layer
hls4ml.converters.register_keras_layer_handler('KReverse', parse_reverse_layer)

# Register the hls4ml's IR layer
hls4ml.model.layers.register_layer('HReverse', HReverse)

for backend_id in ['Vivado', 'Quartus']:
# Register the optimization passes (if any)
backend = hls4ml.backends.get_backend(backend_id)
backend.register_pass('remove_duplicate_reverse', RemoveDuplicateReverse, flow=f'{backend_id.lower()}:optimize')

# Register template passes for the given backend
backend.register_template(HReverseConfigTemplate)
backend.register_template(HReverseFunctionTemplate)

# Register HLS implementation
backend.register_source('nnet_reverse.h')

Finally, we can test the ``hls4ml`` custom layer against the Keras one.

.. code-block:: Python

# Test if it works
kmodel = tf.keras.models.Sequential(
[
tf.keras.layers.Input(shape=(8,)),
KReverse(),
tf.keras.layers.ReLU(),
]
)

x = np.random.randint(-5, 5, (8,), dtype='int32')
kres = kmodel(x)

for backend_id in ['Vivado', 'Quartus']:

hmodel = hls4ml.converters.convert_from_keras_model(
kmodel,
output_dir=str(f'hls4mlprj_extensions_{backend_id}'),
backend=backend_id,
io_type='io_parallel',
hls_config={'Model': {'Precision': 'ap_int<6>', 'ReuseFactor': 1}},
)

hmodel.compile()
hres = hmodel.predict(x.astype('float32'))

np.testing.assert_array_equal(kres, hres)
49 changes: 49 additions & 0 deletions docs/advanced/fifo_depth.rst
@@ -0,0 +1,49 @@
==============================
FIFO Buffer Depth Optimization
==============================

With the ``io_stream`` IO type, each layer is connected to the subsequent layer through first-in first-out (FIFO) buffers.
The implementation of the FIFO buffers contributes to the overall resource utilization of the design, impacting in particular the BRAM or LUT utilization.
Because neural networks can generally have complex architectures, it is hard to know a priori the correct depth of each FIFO buffer.
By default, ``hls4ml`` chooses the most conservative possible depth for each FIFO buffer, which can result in unnecessary overutilization of resources.

In order to reduce the impact on the resources used for FIFO buffer implementation, an optimization has been developed in `#509 <https://github.com/fastmachinelearning/hls4ml/pull/509>`_ that correctly sizes the depth of the FIFO buffers by analyzing the RTL cosimulation.
We implemented this FIFO buffer resizing as a :py:class:`~hls4ml.backends.vivado.passes.fifo_depth_optimization` optimizer pass.
Through RTL simulation with large FIFO buffers (by default set to a depth of 100,000), we estimate the maximum occupancy of each FIFO.
Once the maximum occupancy is determined, the optimizer pass sets the FIFO buffer depth to that value plus 1.
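
Conceptually the resizing rule is just "measured maximum occupancy plus one" per FIFO. A toy sketch of that rule, with made-up occupancy numbers rather than real cosimulation output, would be:

```python
# Hypothetical per-FIFO maximum occupancy observed during RTL cosimulation.
profiled_occupancy = {'fc1_out': 12, 'fc2_out': 34, 'fc3_out': 3}

# The optimizer pass replaces the conservative default depth (100,000)
# with the observed maximum plus one.
optimized_depths = {name: occ + 1 for name, occ in profiled_occupancy.items()}
print(optimized_depths)  # → {'fc1_out': 13, 'fc2_out': 35, 'fc3_out': 4}
```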

As an example, we show below how to use the optimizer pass, inspired by this `GitHub Gist <https://gist.github.com/nicologhielmetti/3a268be32755448920e9f7d5c78a76d8>`_.
First, we can define a simple neural network in Keras:

.. code-block:: Python

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Dense(64, input_shape=(16,), name='fc1', activation='relu'))
model.add(Dense(32, name='fc2', activation='relu'))
model.add(Dense(32, name='fc3', activation='relu'))
model.add(Dense(5, name='fc4', activation='softmax'))

Then, we can convert the model, including the optimization flow in the configuration:

.. code-block:: Python

import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Flows'] = ['vivado:fifo_depth_optimization']
hls4ml.model.optimizer.get_optimizer('vivado:fifo_depth_optimization').configure(profiling_fifo_depth=100_000)


hls_model = hls4ml.converters.convert_from_keras_model(model,
io_type='io_stream',
hls_config=config,
output_dir='hls4mlprj_fifo_depth_opt',
part='xc7z020clg400-1',
backend='Vivado')

hls_model.build(reset=False, csim=True, synth=True, cosim=True)

For more details and results, see `H. Borras et al., "Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark" (2022) <https://arxiv.org/abs/2206.11791>`_.
11 changes: 6 additions & 5 deletions docs/command.rst
@@ -1,15 +1,16 @@
===================================
Command Line Interface (deprecated)
===================================

The command line interface to ``hls4ml`` has been deprecated. Users are advised to use the Python API. This page
documents all the commands that ``hls4ml`` supports as a reference for those who have not migrated.

----

Overview
=========

To start, type ``hls4ml -h`` or ``hls4ml --help`` in your command line; a message like the one below will appear:

.. code-block::
