Refactoring of MVAU and VVAU #963
Conversation
Thanks @mmrahorovic! Before I merge, could you please address the comments I made and update your PR?
This change looks like it has nothing to do with the new class hierarchy. Could you provide more information on this or take it out of the PR, please?
That's correct, I'll remove it.
Nevertheless, do you think it's worth having this change upstream? My reasoning was that this particular transformation step (step_set_fifo_depths) should only affect the FIFOs, as the name suggests. By passing an appropriate node_filter to the ApplyConfig transformation, we ensure that only StreamingFIFO nodes are affected. This could potentially prevent confusion/bugs when the folding config file has changed after the step_apply_folding_config step.
If so, I can create a separate PR with this change and a description.
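To make the idea concrete, here is a minimal sketch, assuming ApplyConfig (from qonnx) accepts a node_filter callable as described above; the step body and the config attribute name are illustrative, not taken from the PR:

```python
from qonnx.transformation.general import ApplyConfig

def step_set_fifo_depths(model, cfg):
    # ... FIFO sizing logic of the build step runs here ...
    # Re-apply the folding config, but restrict it to StreamingFIFO nodes so a
    # config file that changed after step_apply_folding_config cannot silently
    # alter the folding of other layers.
    model = model.transform(
        ApplyConfig(
            cfg.folding_config_file,
            node_filter=lambda node: node.op_type == "StreamingFIFO",
        )
    )
    return model
```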
HLSCustomOp gets deprecated with the refactored system. Could you please take this out of the PR and, if it is a necessary change, incorporate it into the appropriate new class (HWCustomOp, HLSBackend or RTLBackend)?
Will do, thanks for the feedback!
The title of the PR indicates that it also contains a refactoring of the VVAU; that would require more changes than this added function. Could you either add these changes or factor the VVAU out for now and concentrate on the changes for the MVAU?
Good point, thanks! As discussed, I'll have both the VVAU/MVAU refactoring in one PR.
Thanks for updating the ap int max width bound!
@@ -122,10 +133,14 @@ def get_nodeattr_types(self):
            # vector through the accelerator. This will get rid of any old
            # weight data from the weight FIFOs.
            "runtime_writeable_weights": ("i", False, 0, {0, 1}),
        }
            "preferred_impl_style" : ("s", False, "hls", {"hls", "rtl"}),
This node attribute doesn't need to be added here; when executing my_attrs.update(super().get_nodeattr_types()), it is automatically inherited from HWCustomOp. Please remove this. In general, when reviewing the node attributes, the HW abstraction layer for the MatrixVectorActivation should ideally only contain node attributes that the HLS and RTL variants of the MVAU share. Please ensure that this is the case. HLS- or RTL-specific attributes can be added in the _hls or _rtl node variants, like here: https://github.com/Xilinx/finn/blob/refactor/rtl_integration/src/finn/custom_op/fpgadataflow/rtl/streamingfifo_rtl.py#L50
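As a rough sketch of that pattern (import path and class layout assumed from the refactor branch, not copied from the PR):

```python
from finn.custom_op.fpgadataflow.hwcustomop import HWCustomOp

class MatrixVectorActivation(HWCustomOp):
    """HW abstraction layer: only attributes shared by the HLS and RTL variants."""

    def get_nodeattr_types(self):
        my_attrs = {
            # attributes common to both backends go here
            "runtime_writeable_weights": ("i", False, 0, {0, 1}),
        }
        # "preferred_impl_style" and the other generic attributes are pulled in
        # from HWCustomOp here, so they must not be repeated above
        my_attrs.update(super().get_nodeattr_types())
        return my_attrs
```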
Good point, I've removed it now. Thanks!
@@ -165,6 +180,61 @@ def infer_node_datatype(self, model):
        odt = self.get_output_datatype()
        model.set_tensor_datatype(node.output[0], odt)

    def get_input_datatype(self, ind=0):
There is a lot of code duplication: get_input_datatype, get_weight_datatype, get_output_datatype, ... are all defined again from line 510 onwards. Please remove the repetitions.
@@ -728,6 +751,43 @@ def get_hls_compatible_threshold_tensor(self, orig_thres_matrix):
        rows between PEs is not as expected (n_thres_steps)"""
        return ret.reshape(1, pe, tmem, n_thres_steps)

    def get_hls_compatible_weight_tensor(self, orig_weight_matrix):
I see that you first deleted and then added this function again. To me this looks like an HLS-specific function that should be moved to matrixvectoractivation_hls.py; was there a reason to bring it back into the HW abstraction layer?
I think the name of the function is misleading. It suggests it's exclusively something that belongs to the HLS custom-op, but it's actually used in the make_weight_file method. That method is used by both the HLS and RTL custom-op and was hence moved to the HW abstraction layer.
A more appropriate name would be get_hw_compatible_weight_tensor. If you agree, I'll go ahead and rename it now.
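A hypothetical sketch of that layout (class fragment, imports omitted; method bodies elided and the reshape comment is an assumption based on the existing HLS helper):

```python
class MatrixVectorActivation(HWCustomOp):
    def get_hw_compatible_weight_tensor(self, orig_weight_matrix):
        # reshape the (MW, MH) weight matrix into the (1, PE, WMEM, SIMD)
        # layout that both backends consume
        ...

    def make_weight_file(self, weights, weight_file_mode, weight_file_name):
        # used by both the HLS and the RTL custom-op, hence kept in the HW layer
        weight_tensor = self.get_hw_compatible_weight_tensor(weights)
        ...
```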
        out_bias = -1 if odt_is_bipolar else self.get_nodeattr("ActVal")
        result = multithreshold(result, mvau_thr, out_scale, out_bias)

        context[node.output[0]] = result

    def code_generation_ipi(self):
This can be split into an HLS- and RTL-specific part and be moved into the HLS or RTL custom op files.
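For instance, something along these lines (class fragments, imports omitted; class names are assumptions based on the new hierarchy, not the actual code in this PR):

```python
class MatrixVectorActivation_hls(MatrixVectorActivation, HLSBackend):
    def code_generation_ipi(self):
        # HLS variant: return the commands that stitch in the HLS-generated IP
        ...

class MatrixVectorActivation_rtl(MatrixVectorActivation, RTLBackend):
    def code_generation_ipi(self):
        # RTL variant: return the commands that instantiate the hand-written RTL
        ...
```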
            binaryXnorMode=0,
            noActivation=1,
            numInputVectors=list(mm_in_shape[:-1]),
            mem_mode=self.mem_mode,
As a direct result of my comments regarding the HW abstraction layer, you might need to remove some node attributes here.
For now it is fine to only test the HLS implementation of the MVAU.
In general, this PR contains a lot of commits that are not relevant for the refactoring of the MVAU into the HW abstraction layer and HLS variant. This might cause confusion and merge conflicts if you want to introduce these commits again at a later point in time.
Due to the many changes introduced to undo the starting point of the branch, a new branch/PR has been created to maintain a clean history. Please see: #971
This PR refactors the MVAU/VVAU HLS custom-op into a HW custom-op, an HLS custom-op and an RTL custom-op.
Depends on
2. Support for packed MV(A)Us: PR #794.
3. Support for multi-packed DSP58s for VVUs: PR #907
WIP