Fix version extraction in Sphinx config #669

Merged: 14 commits, Oct 27, 2022
3 changes: 1 addition & 2 deletions .github/workflows/build-sphinx.yml
@@ -11,10 +11,9 @@ jobs:

    steps:
    - uses: actions/checkout@v1
    - uses: ammaraskar/sphinx-action@master
    - uses: jmduarte/sphinx-action@main
      with:
        docs-folder: "docs/"
        pre-build-command: "pip install sphinx-rtd-theme numpy six pyyaml h5py 'onnx>=1.4.0' pandas seaborn matplotlib"
    - name: Commit Documentation Changes
      run: |
        git clone https://github.com/fastmachinelearning/hls4ml.git --branch gh-pages --single-branch gh-pages
16 changes: 16 additions & 0 deletions .github/workflows/test-sphinx.yml
@@ -0,0 +1,16 @@
name: test-sphinx
on:
  pull_request:
    branches:
      - main

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v1
    - uses: jmduarte/sphinx-action@main
      with:
        docs-folder: "docs/"
10 changes: 2 additions & 8 deletions docs/conf.py
@@ -15,13 +15,7 @@
sys.path.insert(0, os.path.abspath('../'))

import datetime
def get_version(rel_path):
    for line in open(rel_path):
        if line.startswith('__version__'):
            delim = '"' if '"' in line else "'"
            return line.split(delim)[1]
    else:
        raise RuntimeError("Unable to find version string.")
from setuptools_scm import get_version

# -- Project information -----------------------------------------------------

@@ -30,7 +24,7 @@ def get_version(rel_path):
author = 'Fast Machine Learning Lab'

# The full version, including alpha/beta/rc tags
release = get_version("../hls4ml/__init__.py")
release = get_version(root='..', relative_to=__file__)

# -- General configuration ---------------------------------------------------

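The removed helper scraped `__version__` out of `hls4ml/__init__.py` by hand; the PR replaces it with `setuptools_scm`, which derives the version from git tags at build time (hence the new `setuptools_scm[toml]>=5` entry in `docs/requirements.txt`). For comparison, a stdlib-only sketch in the spirit of the removed helper, with a slightly more robust regex; the file contents and regex here are illustrative, not hls4ml's:

```python
import re
import tempfile

def parse_version(rel_path):
    """Scan a file for a ``__version__ = "..."`` assignment and return its value."""
    with open(rel_path) as fh:
        for line in fh:
            match = re.match(r"""__version__\s*=\s*['"]([^'"]+)['"]""", line)
            if match:
                return match.group(1)
    raise RuntimeError("Unable to find version string.")

# Demonstration on a throwaway file (hypothetical content):
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write('# package metadata\n__version__ = "0.6.0"\n')
    path = tmp.name

print(parse_version(path))  # -> 0.6.0
```

The scm-based approach avoids this kind of parsing entirely, since the single source of truth becomes the git tag rather than a string in the package.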
22 changes: 12 additions & 10 deletions docs/release_notes.rst
@@ -8,19 +8,21 @@ See `here <https://github.com/fastmachinelearning/hls4ml/releases>`__ for offici

**v0.6.0 / coris**

## What's Changed
* `VivadoAccelerator` backend: target `pynq-z2` and `zcu102` boards directly from hls4ml by @nicologhielmetti
* Updated `PyTorch` and `ONNX` converters by @Duchstf
* `line_buffer` Conv2D implementation for `io_stream`: reduced resource usage and latency by @Keb-L, @violatingcp, @vloncar
* Support `QConv2DBatchnorm` layer from `QKeras` by @nicologhielmetti
* Improved profiling plots - easier to compare original vs `hls4ml` converted models by @maksgraczyk
* Better derivation of data types for `QKeras` models by @jmduarte, @thesps
What's changed:

* ``VivadoAccelerator`` backend: target ``pynq-z2`` and ``zcu102`` boards directly from hls4ml by @nicologhielmetti
* Updated ``PyTorch`` and ``ONNX`` converters by @Duchstf
* ``line_buffer`` Conv2D implementation for ``io_stream``: reduced resource usage and latency by @Keb-L, @violatingcp, @vloncar
* Support ``QConv2DBatchnorm`` layer from ``QKeras`` by @nicologhielmetti
* Improved profiling plots - easier to compare original vs ``hls4ml`` converted models by @maksgraczyk
* Better derivation of data types for ``QKeras`` models by @jmduarte, @thesps
* Improved CI by @thesps
* More support for models with branches, skip connections, `Merge` and `Concatenate` layers by @jmduarte, @vloncar
* Support for `Dense` layers over multi-dimensional tensors by @vloncar
* More support for models with branches, skip connections, ``Merge`` and ``Concatenate`` layers by @jmduarte, @vloncar
* Support for ``Dense`` layers over multi-dimensional tensors by @vloncar
* Overall improvements by @vloncar, @jmduarte, @thesps, @jmitrevs & others

## New Contributors
New contributors:

* @siorpaes made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/424
* @jmitrevs made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/403
* @anders-wind made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/302
12 changes: 12 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,12 @@
sphinx>=3.2.1
sphinx_rtd_theme
toposort>=1.5.0
numpy
six
pyyaml
h5py
onnx>=1.4.0
pandas
seaborn
matplotlib
setuptools_scm[toml]>=5
31 changes: 18 additions & 13 deletions hls4ml/writer/quartus_writer.py
@@ -114,9 +114,11 @@ def write_project_cpp(self, model):
        ## myproject.cpp
        ###################

        project_name = model.config.get_project_name()

        filedir = os.path.dirname(os.path.abspath(__file__))
        f = open(os.path.join(filedir, '../templates/quartus/firmware/myproject.cpp'), 'r')
        fout = open('{}/firmware/{}.cpp'.format(model.config.get_output_dir(), model.config.get_project_name()), 'w')
        fout = open('{}/firmware/{}.cpp'.format(model.config.get_output_dir(), project_name), 'w')

        model_inputs = model.get_input_variables()
        model_outputs = model.get_output_variables()
@@ -127,7 +129,7 @@ def write_project_cpp(self, model):
        for line in f.readlines():
            # Add headers to weights and biases
            if 'myproject' in line:
                newline = line.replace('myproject', model.config.get_project_name())
                newline = line.replace('myproject', project_name)

            # Intel HLS 'streams' need to be passed by reference to top-level entity or declared as global variables
            # Streams cannot be declared inside a function
@@ -146,29 +148,29 @@
            elif '//hls-fpga-machine-learning instantiate GCC top-level' in line:
                newline = line
                if io_type == 'io_stream':
                    newline += 'void myproject(\n'
                    newline += f'void {project_name}(\n'
                    for inp in model_inputs:
                        newline += indent+'stream_in<{}> &{}_stream,\n'.format(inp.type.name, inp.name)
                    for out in model_outputs:
                        newline += indent+'stream_out<{}> &{}_stream\n'.format(out.type.name, out.name)
                    newline += ') {\n'
                if io_type == 'io_parallel':
                    newline = 'output_data myproject(\n'
                    newline = f'output_data {project_name}(\n'
                    newline+=indent+'input_data inputs\n'
                    newline+=') {\n'

            # Instantiate HLS top-level function, to be used during HLS synthesis
            elif '//hls-fpga-machine-learning instantiate HLS top-level' in line:
                newline = line
                if io_type == 'io_stream':
                    newline += 'component void myproject(\n'
                    newline += f'component void {project_name}(\n'
                    for inp in model_inputs:
                        newline += indent+'stream_in<{}> &{}_stream,\n'.format(inp.type.name, inp.name)
                    for out in model_outputs:
                        newline += indent+'stream_out<{}> &{}_stream\n'.format(out.type.name, out.name)
                    newline += ') {\n'
                if io_type == 'io_parallel':
                    newline += 'component output_data myproject(\n'
                    newline += f'component output_data {project_name}(\n'
                    newline += indent+'input_data inputs\n'
                    newline += ') {\n'

@@ -263,9 +265,11 @@ def write_project_header(self, model):
        ## myproject.h
        #######################

        project_name = model.config.get_project_name()

        filedir = os.path.dirname(os.path.abspath(__file__))
        f = open(os.path.join(filedir, '../templates/quartus/firmware/myproject.h'), 'r')
        fout = open('{}/firmware/{}.h'.format(model.config.get_output_dir(), model.config.get_project_name()), 'w')
        fout = open('{}/firmware/{}.h'.format(model.config.get_output_dir(), project_name), 'w')

        model_inputs = model.get_input_variables()
        model_outputs = model.get_output_variables()
@@ -276,39 +280,40 @@

        for line in f.readlines():
            if 'MYPROJECT' in line:
                newline = line.replace('MYPROJECT', format(model.config.get_project_name().upper()))
                newline = line.replace('MYPROJECT', format(project_name.upper()))

            elif 'myproject' in line:
                newline = line.replace('myproject', model.config.get_project_name())
                newline = line.replace('myproject', project_name)

            elif '//hls-fpga-machine-learning instantiate GCC top-level' in line:
                newline = line
                # For io_stream, input and output are passed by reference; see myproject.h & myproject.cpp for more details

                if io_type == 'io_stream':
                    newline += 'void myproject(\n'
                    newline += f'void {project_name}(\n'
                    for inp in model_inputs:
                        newline += indent+'stream_in<{}> &{}_stream,\n'.format(inp.type.name, inp.name)
                    for out in model_outputs:
                        newline += indent+'stream_out<{}> &{}_stream\n'.format(out.type.name, out.name)
                    newline += ');\n'
                # In io_parallel, a struct is returned; see myproject.h & myproject.cpp for more details
                else:
                    newline += 'output_data myproject(\n'
                    newline += f'output_data {project_name}(\n'
                    newline += indent+'input_data inputs\n'
                    newline += ');\n'

            # Similar to GCC instantiation, but with the keyword 'component'
            elif '//hls-fpga-machine-learning instantiate HLS top-level' in line:
                newline = line
                if io_type == 'io_stream':
                    newline += 'component void myproject(\n'
                    newline += f'component void {project_name}(\n'
                    for inp in model_inputs:
                        newline += indent+'stream_in<{}> &{}_stream,\n'.format(inp.type.name, inp.name)
                    for out in model_outputs:
                        newline += indent+'stream_out<{}> &{}_stream\n'.format(out.type.name, out.name)
                    newline += ');\n'
                else:
                    newline += 'component output_data myproject(\n'
                    newline += f'component output_data {project_name}(\n'
                    newline += indent+'input_data inputs\n'
                    newline += ');\n'

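The writer changes above hoist `model.config.get_project_name()` into a local `project_name` and build the top-level signatures with f-strings instead of hard-coding `myproject`. A minimal, self-contained sketch of that substitution pattern; the function and type names (`render_top_level`, `input_t`, `result_t`) are hypothetical, not part of hls4ml:

```python
def render_top_level(template_line, project_name, io_type, inputs, outputs, indent='    '):
    """Sketch of the writer pattern: start from the template marker line and
    append a top-level function signature built around the project name."""
    newline = template_line
    if io_type == 'io_stream':
        # Streams are passed by reference to the top-level entity
        newline += f'void {project_name}(\n'
        for inp in inputs:
            newline += indent + f'stream_in<input_t> &{inp}_stream,\n'
        for out in outputs:
            newline += indent + f'stream_out<result_t> &{out}_stream\n'
        newline += ') {\n'
    else:  # io_parallel returns a struct instead
        newline = f'output_data {project_name}(\n'
        newline += indent + 'input_data inputs\n'
        newline += ') {\n'
    return newline

sig = render_top_level('//hls-fpga-machine-learning instantiate GCC top-level\n',
                       'myproject_axi', 'io_parallel', ['x'], ['y'])
print(sig)
```

Centralizing the name in one variable means a single `get_project_name()` call per file and no stray literal `myproject` left behind in generated code.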
19 changes: 9 additions & 10 deletions test/pytest/test_keras_h5_loader.py
@@ -7,8 +7,6 @@

test_root_path = Path(__file__).parent

test_root_path = Path('/tmp')


@pytest.mark.parametrize('backend', ['Vivado', 'Quartus'])
def test_keras_h5_loader(backend):
@@ -20,14 +18,15 @@ def test_keras_h5_loader(backend):

    hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')

    config = {'OutputDir': 'KerasH5_loader_test',
              'ProjectName': 'KerasH5_loader_test',
              'Backend': backend,
              'ClockPeriod': 25.0,
              'IOType': 'io_parallel',
              'HLSConfig': hls_config,
              'KerasH5': str(test_root_path / 'KerasH5_loader_test.h5'),
              'output_dir': str(test_root_path / 'KerasH5_loader_test')}
    config = {
        'OutputDir': str(test_root_path / f'hls4mlprj_KerasH5_loader_test_{backend}'),
        'ProjectName': f'KerasH5_loader_test_{backend}',
        'Backend': backend,
        'ClockPeriod': 25.0,
        'IOType': 'io_parallel',
        'HLSConfig': hls_config,
        'KerasH5': str(test_root_path / f'hls4mlprj_KerasH5_loader_test_{backend}/model.h5'),
    }

    model.save(config['KerasH5'])
    hls_model = hls4ml.converters.keras_to_hls(config)
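The test now derives its output directory, project name, and `.h5` path from the backend under test instead of a shared `/tmp` location, so the parametrized Vivado and Quartus runs no longer clobber each other's files. A sketch of that config-building pattern; the `loader_config` helper is illustrative, not hls4ml API:

```python
from pathlib import Path

def loader_config(test_root_path, backend, hls_config=None):
    """Build a per-backend hls4ml-style config dict so parallel pytest
    parametrizations write to disjoint directories."""
    prj = f'KerasH5_loader_test_{backend}'
    out = test_root_path / f'hls4mlprj_{prj}'
    return {
        'OutputDir': str(out),
        'ProjectName': prj,
        'Backend': backend,
        'ClockPeriod': 25.0,
        'IOType': 'io_parallel',
        'HLSConfig': hls_config or {},
        'KerasH5': str(out / 'model.h5'),  # model saved inside the project dir
    }

vivado = loader_config(Path('.'), 'Vivado')
quartus = loader_config(Path('.'), 'Quartus')
assert vivado['OutputDir'] != quartus['OutputDir']
```

Keying every path on the parametrized backend is the same fix that motivates dropping the hard-coded `Path('/tmp')` override above.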
2 changes: 1 addition & 1 deletion test/pytest/test_softsign.py
@@ -30,4 +30,4 @@ def test_softsign(backend, input_shape, io_type):
    acc_hls4ml = accuracy_score(np.argmax(y_keras, axis=-1).ravel(), np.argmax(y_hls4ml, axis=-1).ravel())

    print('Accuracy hls4ml relative to keras: {}'.format(acc_hls4ml))
    assert acc_hls4ml >= 0.97
    assert acc_hls4ml >= 0.96