
Commit 3d083b6
update pre-commit hooks
Parent: e4e529a


49 files changed: +84 additions, -141 deletions

Most hunks below pair a removed line with an added line that renders identically; the difference appears to be whitespace-only, consistent with re-running the updated formatting hooks. The substantive changes are the two hook version bumps in `.pre-commit-config.yaml` (black 22.12.0 to 23.3.0, flake8 5.0.4 to 6.0.0).

.github/workflows/build-sphinx.yml (1 addition, 1 deletion)

```diff
@@ -1,7 +1,7 @@
 name: build-sphinx
 on:
   push:
-    branches:
+    branches:
       - main

 jobs:
```

.github/workflows/pypi-publish.yml (1 addition, 1 deletion)

```diff
@@ -1,5 +1,5 @@
 name: 📦 Packaging release to PyPI
-on:
+on:
   release:
     types: [released]

```

.github/workflows/test-sphinx.yml (2 additions, 2 deletions)

```diff
@@ -1,7 +1,7 @@
 name: test-sphinx
 on:
   pull_request:
-    branches:
+    branches:
       - main

 jobs:
@@ -10,7 +10,7 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v1
+      - uses: actions/checkout@v1
       - uses: jmduarte/sphinx-action@main
        with:
          docs-folder: "docs/"
```

.pre-commit-config.yaml (2 additions, 2 deletions)

```diff
@@ -2,7 +2,7 @@ exclude: (^hls4ml\/templates\/(vivado|quartus)\/(ap_types|ac_types)\/|^test/pyte

 repos:
   - repo: https://github.com/psf/black
-    rev: 22.12.0
+    rev: 23.3.0
     hooks:
       - id: black
         language_version: python3
@@ -41,7 +41,7 @@ repos:
       - id: setup-cfg-fmt

   - repo: https://github.com/pycqa/flake8
-    rev: 5.0.4
+    rev: 6.0.0
     hooks:
       - id: flake8
         exclude: docs/conf.py
```
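The two substantive edits in this file are the `rev:` bumps for black and flake8, which is the kind of rewrite `pre-commit autoupdate` produces. As a minimal illustration (the `bump_rev` helper is hypothetical, not part of pre-commit), the same edit in plain Python:

```python
import re

# Excerpt of .pre-commit-config.yaml before this commit (from the diff above).
config = """\
repos:
  - repo: https://github.com/psf/black
    rev: 22.12.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
"""

def bump_rev(text: str, repo: str, new_rev: str) -> str:
    """Rewrite the `rev:` line that follows a given `- repo:` URL."""
    pattern = re.compile(rf"(- repo: {re.escape(repo)}\n\s*rev: )\S+")
    return pattern.sub(rf"\g<1>{new_rev}", text)

config = bump_rev(config, "https://github.com/psf/black", "23.3.0")
config = bump_rev(config, "https://github.com/pycqa/flake8", "6.0.0")
```

In practice one would run `pre-commit autoupdate`, which performs this rewrite against each hook's latest tag, then `pre-commit run --all-files` to apply the updated hooks (which would produce the whitespace-only hunks seen elsewhere in this commit).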

README.md (1 addition, 1 deletion)

````diff
@@ -21,7 +21,7 @@ Detailed tutorials on how to use `hls4ml`'s various functionalities can be found
 pip install hls4ml
 ```

-To install the extra dependencies for profiling:
+To install the extra dependencies for profiling:

 ```
 pip install hls4ml[profiling]
````

contrib/kl_layer/kl_layer.py (0 additions, 1 deletion)

```diff
@@ -34,7 +34,6 @@ def build(self, input_shape):
         super().build(input_shape)

     def _merge_function(self, inputs):
-
         mean = inputs[0]
         log_var = inputs[1]

```
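The function touched here reads `mean` and `log_var` inputs; given the file name, this is presumably the Gaussian KL-divergence term used as a VAE regularizer. A stdlib-only sketch of that standard formula (an assumption about what `_merge_function` computes, not code from this commit):

```python
import math

def gaussian_kl(mean: float, log_var: float) -> float:
    """KL(N(mean, exp(log_var)) || N(0, 1)), the usual VAE regularizer:
    -0.5 * (1 + log_var - mean^2 - exp(log_var))."""
    return -0.5 * (1.0 + log_var - mean**2 - math.exp(log_var))

# A unit Gaussian matches the standard-normal prior exactly, so its KL term is zero.
assert gaussian_kl(0.0, 0.0) == 0.0
```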

docs/api/configuration.rst (11 additions, 11 deletions)

```diff
@@ -4,19 +4,19 @@ Configuration



-We currently support two ways of setting hls4ml's model configuration. This page documents both methods' usage.
+We currently support two ways of setting hls4ml's model configuration. This page documents both methods' usage.


-.. contents:: \
+.. contents:: \


-**NOTE:**
+**NOTE:**


-*
+*
   One important part of ``hls4ml`` to remember is that the user is responsible for the format of the inputs. There is no automatic formatting or normalization so this must be done in the training.

-*
+*
   For developers, you might also want to checkout this section: `Detailed configuration in converted hls codes <#detailed-configuration-in-converted-hls-codes>`_.

 ----
@@ -73,26 +73,26 @@ It looks like this:
   Part: xcku115-flvb2104-2-i
   ClockPeriod: 5
   IOType: io_parallel # options: io_parallel/io_stream
-
+
   HLSConfig:
     Model:
       Precision: ap_fixed<16,6>
       ReuseFactor: 1
-      Strategy: Latency
+      Strategy: Latency
     LayerType:
       Dense:
         ReuseFactor: 2
         Strategy: Resource
         Compression: True

-There are a number of configuration options that you have. Let's go through them. You have basic setup parameters:
+There are a number of configuration options that you have. Let's go through them. You have basic setup parameters:


 * **OutputDir**\ : the output directory where you want your HLS project to appear
 * **ProjectName**\ : the name of the HLS project IP that is produced
-* **KerasJson/KerasH5**\ : for Keras, the model architecture and weights are stored in a ``json`` and ``h5`` file. The path to those files are required here.
+* **KerasJson/KerasH5**\ : for Keras, the model architecture and weights are stored in a ``json`` and ``h5`` file. The path to those files are required here.
   We also support keras model's file obtained just from ``model.save()``. In this case you can just supply the ``h5`` file in ``KerasH5:`` field.
-* **InputData/OutputPredictions**\ : path to your input/predictions of the model. If none is supplied, then hls4ml will create aritificial data for simulation. The data used above in the example can be found `here <https://cernbox.cern.ch/index.php/s/2LTJVVwCYFfkg59>`__. We also support ``npy`` data files. We welcome suggestions on more input data types to support.
+* **InputData/OutputPredictions**\ : path to your input/predictions of the model. If none is supplied, then hls4ml will create aritificial data for simulation. The data used above in the example can be found `here <https://cernbox.cern.ch/index.php/s/2LTJVVwCYFfkg59>`__. We also support ``npy`` data files. We welcome suggestions on more input data types to support.

 The backend-specific section of the configuration depends on the backend. You can get a starting point for the necessary settings using, for example `hls4ml.templates.get_backend('Vivado').create_initial_config()`.
 For Vivado backend the options are:
@@ -147,7 +147,7 @@ A specific layer can be targeted like this:
         ReuseFactor: 16
     LayerName:
       dense1:
-        Precision:
+        Precision:
           weight: ap_fixed<14,2>
           bias: ap_fixed<14,4>
           result: ap_fixed<16,6>
```

docs/api/hls-model.rst (4 additions, 4 deletions)

```diff
@@ -23,7 +23,7 @@ After that, you can use several methods in that object. Here is a list of all th
 * :ref:`build <build-method>`
 * :ref:`trace <trace-method>`

-Similar functionalities are also supported through command line interface. If you prefer using them, please refer to Command Help section.
+Similar functionalities are also supported through command line interface. If you prefer using them, please refer to Command Help section.

 ----

@@ -67,7 +67,7 @@ Similar to ``keras``\ 's predict API, you can get the predictions of ``hls_model

    y = hls_model.predict(X)

-This is similar to doing ``csim`` simulation, but you can get your prediction results much faster. It's very helpful when you want to quickly prototype different configurations for your model.
+This is similar to doing ``csim`` simulation, but you can get your prediction results much faster. It's very helpful when you want to quickly prototype different configurations for your model.

 ----

@@ -80,7 +80,7 @@ This is similar to doing ``csim`` simulation, but you can get your prediction re

    hls_model.build()

-   #You can also read the report of the build
+   #You can also read the report of the build
    hls4ml.report.read_vivado_report('hls4ml_prj')

 ----
@@ -92,7 +92,7 @@ This is similar to doing ``csim`` simulation, but you can get your prediction re

 The trace method is an advanced version of the ``predict`` method. It's used to trace individual outputs from each layer of the hls_model. This is useful for debugging and setting the appropriate configuration.

-**Return:** A dictionary where the keys are the names of the layers, and its values are the layers's outputs.
+**Return:** A dictionary where the keys are the names of the layers, and its values are the layers's outputs.

 .. code-block:: python

```

docs/command.rst (1 addition, 1 deletion)

```diff
@@ -9,7 +9,7 @@ This page documents all the commands that ``hls4ml`` supports.
 Overview
 =========

-To start you can just type in ``hls4ml -h`` or ``hls4ml --help`` in your command line, a message will show up like below:
+To start you can just type in ``hls4ml -h`` or ``hls4ml --help`` in your command line, a message will show up like below:

 .. code-block::

```

docs/concepts.rst (2 additions, 2 deletions)

```diff
@@ -2,7 +2,7 @@
 Concepts
 ========

-The goal of ``hls4ml`` is to provide an efficient and fast translation of machine learning models from open-source packages (like Keras and PyTorch) for training machine learning algorithms to high level synthesis (HLS) code that can then be transpiled to run on an FPGA. The resulting HLS project can be then used to produce an IP which can be plugged into more complex designs or be used to create a kernel for CPU co-processing. The user has freedom to define many of the parameters of their algorithm to best suit their needs.
+The goal of ``hls4ml`` is to provide an efficient and fast translation of machine learning models from open-source packages (like Keras and PyTorch) for training machine learning algorithms to high level synthesis (HLS) code that can then be transpiled to run on an FPGA. The resulting HLS project can be then used to produce an IP which can be plugged into more complex designs or be used to create a kernel for CPU co-processing. The user has freedom to define many of the parameters of their algorithm to best suit their needs.

 The ``hls4ml`` package enables fast prototyping of a machine learning algorithm implementation in FPGAs,
 greatly reducing the time to results and giving the user intuition for how to best design a machine learning algorithm for their application while balancing performance, resource utilization and latency requirements.
@@ -35,7 +35,7 @@ Consider a multi-layered neural network. At each neuron in a layer :math:`m` (c

    \mathbf{x}_m = g_m (W_{m,m-1} \mathbf{x}_{m-1} +\mathbf{b}_m)

-With hls4ml, each layer of output values is calculated independently in sequence, using pipelining to speed up the process by accepting new inputs after an initiation interval. The activations, if nontrivial, are precomputed.
+With hls4ml, each layer of output values is calculated independently in sequence, using pipelining to speed up the process by accepting new inputs after an initiation interval. The activations, if nontrivial, are precomputed.

 To ensure optimal performance, the user can control aspects of their model, principally:

```
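The docs/concepts.rst hunk above quotes the layer equation x_m = g_m(W_{m,m-1} x_{m-1} + b_m). As a plain-Python sketch of that equation (illustrative only; hls4ml emits HLS C++, and all names and values here are made up):

```python
def relu(v):
    """Elementwise activation g: max(0, x)."""
    return [max(0.0, x) for x in v]

def dense_layer(W, x, b, g):
    """x_m = g(W @ x_{m-1} + b): matrix-vector product plus bias,
    followed by an elementwise activation."""
    z = [sum(wij * xj for wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return g(z)

# Toy 2x3 layer: two output neurons computed from three inputs.
W = [[1.0, 0.0, -1.0],
     [0.5, 0.5, 0.5]]
b = [0.0, -1.0]
x = [1.0, 2.0, 3.0]
assert dense_layer(W, x, b, relu) == [0.0, 2.0]
```

In the hls4ml picture described by the prose, each such layer is evaluated in sequence in hardware, with pipelining allowing a new input vector to enter after the initiation interval.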

0 commit comments