External weights vivado accelerator #646
base: main
base: main
Changes from all commits: 5de33c6, 3295432, 932fec8, 1c9929e, fc00d4b
```diff
@@ -604,14 +604,20 @@ def compile(self):
         self._top_function_lib = ctypes.cdll.LoadLibrary(lib_name)
 
     def _get_top_function(self, x):
+        io_type = self.config.get_config_value('IOType')
+        interface = self.config.get_config_value('AcceleratorConfig')['Interface'] if self.config.get_config_value('AcceleratorConfig') else None
+        config_weights = (io_type == 'io_stream') and (interface == 'axi_master')
+
         if self._top_function_lib is None:
             raise Exception('Model not compiled')
         if len(self.get_input_variables()) == 1:
             xlist = [x]
         else:
             xlist = x
         n_outputs = len(self.get_output_variables())
+        n_weights = len(self.get_weight_variables())
+
         for xi in xlist:
             if not isinstance(xi, np.ndarray):
                 raise Exception('Expected numpy.ndarray, but got {}'.format(type(x)))
@@ -628,9 +634,9 @@ def _get_top_function(self, x):
         else:
             raise Exception('Invalid type ({}) of numpy array. Supported types are: single, float32, double, float64, float_.'.format(x0.dtype))
 
         top_function.restype = None
-        top_function.argtypes = [npc.ndpointer(ctype, flags="C_CONTIGUOUS") for i in range(len(xlist) + n_outputs)]
+        top_function.argtypes = [npc.ndpointer(ctype, flags="C_CONTIGUOUS") \
+            for i in range(len(xlist) + (n_weights if config_weights else 0) + n_outputs)]
 
         return top_function, ctype
```
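For context, here is a minimal self-contained sketch of how the diff above sizes the ctypes signature (the helper name and parameters are illustrative, not the hls4ml API): one C-contiguous array pointer per input, per output, and, only when `config_weights` is enabled, per weight variable.

```python
import ctypes
import numpy.ctypeslib as npc

def set_argtypes(top_function, n_inputs, n_outputs, n_weights,
                 config_weights, ctype=ctypes.c_float):
    # One ndarray pointer per input, per output, and (optionally) per weight
    # variable; all arguments share the same element type and memory layout.
    n_args = n_inputs + (n_weights if config_weights else 0) + n_outputs
    top_function.restype = None
    top_function.argtypes = [npc.ndpointer(ctype, flags='C_CONTIGUOUS')
                             for _ in range(n_args)]
```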
```diff
@@ -654,10 +660,16 @@ def _compute_n_samples(self, x):
         return int(n_sample)
 
     def predict(self, x):
+        io_type = self.config.get_config_value('IOType')
+        interface = self.config.get_config_value('AcceleratorConfig')['Interface'] if self.config.get_config_value('AcceleratorConfig') else None
+        config_weights = (io_type == 'io_stream') and (interface == 'axi_master')
+
         top_function, ctype = self._get_top_function(x)
         n_samples = self._compute_n_samples(x)
         n_inputs = len(self.get_input_variables())
         n_outputs = len(self.get_output_variables())
+        n_weights = len(self.get_weight_variables())
+
         curr_dir = os.getcwd()
         os.chdir(self.config.get_output_dir() + '/firmware')
```

Review comment on lines +664 to +666: Same comment as above.
```diff
@@ -675,10 +687,16 @@ def predict(self, x):
             inp = [np.asarray(xj[i]) for xj in x]
             argtuple = inp
             argtuple += predictions
+            if config_weights:
+                for j in range(n_weights):
+                    weights = [float(w) for w in self.get_weight_variables()[j]]
+                    argtuple += [np.asarray(weights)]
             argtuple = tuple(argtuple)
             top_function(*argtuple)
-            output.append(predictions)
+            if config_weights and n_samples == 1 and n_inputs:
+                output.append([predictions])
+            else:
+                output.append(predictions)
 
         # Convert to list of numpy arrays (one for each output)
         output = [np.asarray([output[i_sample][i_output] for i_sample in range(n_samples)]) for i_output in range(n_outputs)]
```
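To make the resulting calling convention explicit, here is a small self-contained sketch (illustrative names only, not the hls4ml API) of how each per-sample argument tuple is assembled: inputs first, then the preallocated output buffers, then one flattened float array per weight variable when `config_weights` is set.

```python
import numpy as np

def build_argtuple(inputs, predictions, weight_variables, config_weights):
    # inputs: list of 1-D input arrays for one sample
    # predictions: list of preallocated 1-D output arrays, filled by the call
    args = list(inputs) + list(predictions)
    if config_weights:
        # each weight variable is flattened to a plain float array, matching
        # the extra ndpointer arguments registered in _get_top_function
        for wv in weight_variables:
            args.append(np.asarray([float(w) for w in wv]))
    return tuple(args)
```

The tuple is then splatted into the compiled top function, as in `top_function(*build_argtuple(...))`.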
```diff
@@ -6,7 +6,7 @@ def match(self, node):
         cast = False
         if isinstance(node, Activation):
             cast = node.get_input_variable().type.precision != node.get_output_variable().type.precision
-        return isinstance(node, Activation) and node.get_attr('activation') == 'linear' and not cast
+        return isinstance(node, Activation) and node.get_attr('activation') == 'linear' # and not cast
 
     def transform(self, model, node):
         model.remove_node(node)
```

Reviewer: ? I don't think this should be included in this PR.

Author: That was a quick and dirty hack to get some models to optimize better, but it really was meant only for that branch, where we had checked the correctness. I agree that it should not be in the PR.
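For intuition on why the `not cast` guard matters (a hedged illustration, not hls4ml code): a 'linear' activation whose output precision differs from its input acts as a cast, so removing it is not numerically neutral. The same effect can be shown with NumPy dtypes:

```python
import numpy as np

x = np.float32(0.1) + np.float32(0.2)  # a float32 intermediate result
y = np.float16(x)                      # a "linear" node that only narrows precision
print(x, y, x == y)                    # values differ slightly, so the cast
                                       # cannot be removed as a no-op
```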
```diff
@@ -22,8 +22,9 @@
 #include <vector>
 #include <map>
 #include <stdio.h>
-#include <stdlib.h>
-#include <math.h>
+#include <cstdlib>
+#include <cmath>
+#include <cfloat>
 
 #include "firmware/myproject.h"
 #include "firmware/nnet_utils/nnet_helpers.h"
```

Review comment on lines -25 to +27:

Reviewer: ?

Author: I had to include `<cfloat>`. While I was at it, I changed `<stdlib.h>` and `<math.h>` to the C++ headers `<cstdlib>` and `<cmath>`.
```diff
@@ -56,6 +57,10 @@ int main(int argc, char **argv)
     std::string pline;
     int e = 0;
 
+    //hls-fpga-machine-learning insert weights
+
+    //hls-fpga-machine-learning insert load weights
+
     if (fin.is_open() && fpr.is_open()) {
         while ( std::getline(fin,iline) && std::getline (fpr,pline) ) {
             if (e % CHECKPOINT == 0) std::cout << "Processing input " << e << std::endl;
```
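As a sketch of how the backend writer could expand these two markers into generated testbench code (the helper name, the `(name, length)` pairs, and the emitted lines are assumptions for illustration; `nnet::load_weights_from_txt` does exist in `nnet_helpers.h`):

```python
def fill_testbench_markers(template_lines, weight_vars):
    """Expand the two new testbench markers.

    weight_vars is a list of (name, length) pairs standing in for the real
    weight-variable objects.
    """
    out = []
    for line in template_lines:
        out.append(line)
        if '//hls-fpga-machine-learning insert weights' in line:
            # declare one plain float array per weight variable
            for name, length in weight_vars:
                out.append('float {}[{}];'.format(name, length))
        elif '//hls-fpga-machine-learning insert load weights' in line:
            # fill each array from the .txt file written alongside the firmware
            for name, length in weight_vars:
                out.append('nnet::load_weights_from_txt<float, {}>({}, "{}.txt");'.format(name, length, name))
    return out
```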
Reviewer: Two things:

- There should be an `assert` somewhere to allow only the combinations of the other parameters that have been implemented (i.e. right now it also depends on the board).
- This should not live in `ModelGraph`, as it's backend specific.

Author: I agree. This was mostly a placeholder, and I was going to ask what would be better. As in the main thread, do we already have a matrix or list of the existing possible combinations (what goes with what)? I am not sure how to properly set up the configuration/trigger. Do you have suggestions?
Author: I started putting together a document, which is far from being final. @thesps and @jmitrevs should be able to edit. Not sure if that may help or if it is better to keep it to the comments here. A bit of both, I guess.

If I understand it right, the additional configuration parameters (to enable programmable weights) should be passed via the function `create_initial_config` for the `VivadoAccelerator` backend. If you agree, then I would add:

- `configurable_weights` (`bool`, optional): defaults to `False`
- `weight_type` (`dict`, optional): the weight type for each layer (`float` or an `ap_type`); if the type is not specified for a layer, then it defaults to `float`

Please let me know if that makes sense.
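To make the proposal concrete, here is a sketch of how `create_initial_config` for the `VivadoAccelerator` backend might grow the two options, folding in the `assert` requested above. The parameter list is abbreviated and the config keys are assumptions, not the final API.

```python
def create_initial_config(board='pynq-z2', io_type='io_parallel',
                          interface='axi_stream',
                          configurable_weights=False, weight_type=None,
                          **kwargs):
    # Guard the combinations that have actually been implemented so far:
    # programmable weights currently require io_stream + axi_master.
    assert not configurable_weights or \
        (io_type == 'io_stream' and interface == 'axi_master'), \
        'configurable_weights is only supported with io_stream and axi_master'

    return {
        'Board': board,
        'Interface': interface,
        'ConfigurableWeights': configurable_weights,
        # per-layer weight types ('float' or an ap_type);
        # layers not listed default to 'float'
        'WeightType': weight_type if weight_type is not None else {},
    }
```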