Pure torch implementation #433
base: main
Conversation
Thank you for your great work! In the code [torch_df_offline.py], sample_rate is set to 48000. When sample_rate=16000, what should self.erb_indices be set to?
Here is the original code; you can calculate the feature bank using it: DeepFilterNet/libDF/src/lib.rs Line 62 in 59789e1
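For readers who can't run the Rust code, here is a rough Python sketch of how ERB-spaced band widths can be derived for an arbitrary sample rate. This is an approximation of the idea in libDF/src/lib.rs, not a verbatim port, and the default `fft_size=320`, `nb_bands=32` values for 16 kHz are assumptions:

```python
import math

def hz2erb(freq_hz):
    # ERB-rate scale (Moore & Glasberg style approximation)
    return 9.265 * math.log(1.0 + freq_hz / (24.7 * 9.265))

def erb2hz(erb):
    # inverse of hz2erb
    return 24.7 * 9.265 * (math.exp(erb / 9.265) - 1.0)

def erb_widths(sr=16000, fft_size=320, nb_bands=32, min_nb_freqs=2):
    """Split the rfft bins into nb_bands ERB-spaced bands.

    Returns per-band widths (in FFT bins) summing to fft_size // 2 + 1.
    """
    n_freqs = fft_size // 2 + 1
    nyquist = sr / 2
    erb_step = hz2erb(nyquist) / nb_bands
    widths, prev_bin = [], 0
    for b in range(1, nb_bands + 1):
        f = erb2hz(erb_step * b)
        bin_idx = round(f / nyquist * (n_freqs - 1))
        # enforce a minimum number of bins per band and stay in range
        bin_idx = min(max(bin_idx, prev_bin + min_nb_freqs), n_freqs)
        widths.append(bin_idx - prev_bin)
        prev_bin = bin_idx
    widths[-1] += n_freqs - prev_bin  # last band absorbs the remainder
    return widths
```

Cumulative sums of these widths give band boundary indices analogous to `self.erb_indices`.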
Thank you. Problem solved.
Hi, thanks for this PR! I would be interested in merging it. A few things that I want to discuss:
Hey, if anyone can recreate the LADSPA plugin with ONNX, please do, as the single-threaded tract needs a pretty big single core.
A few notes:

```toml
[features]
tract = ["deep_filter/tract", "deep_filter/default-model"]
```

However, I am not sure if there is a way to add this as an optional dependency to the
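Cargo does support gating a dependency behind a feature. As a sketch of what that could look like (the `dep:` syntax requires Cargo 1.60+; the version string here is a placeholder, and this is not the crate's actual manifest):

```toml
# Hypothetical Cargo.toml fragment: pull in deep_filter only when
# the `tract` feature is enabled.
[features]
tract = ["dep:deep_filter", "deep_filter/tract", "deep_filter/default-model"]

[dependencies]
deep_filter = { version = "*", optional = true }
```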
@Rikorose
Hello! Can you provide more details? What code are you running? The Encoder in this reimplementation takes 4 parameters.
Btw, this code is for inference only; I didn't do anything with training. Also, I haven't committed to this branch in a long time, so nothing should break if you run the code with no changes. What do you mean by "build"? Do the tests pass? You can see here how to run the tests.
Hello, this is because I had installed the df library in my environment: the calls were going to the library's functions rather than the ones you set up.
@hulucky1102 You can look here for an example of how to run inference with the streaming version correctly. Also, check that you export with
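The general pattern for driving a streaming model frame by frame looks roughly like this. This is a hedged sketch with hypothetical names: `HOP`, `step_fn`, and the state handling are placeholders, not the actual torchDF API:

```python
import numpy as np

HOP = 480  # hypothetical hop size (10 ms at 48 kHz)

def enhance_streaming(step_fn, audio, init_states):
    """Feed `audio` to a per-frame step function, threading states through.

    step_fn(frame, states) -> (enhanced_frame, new_states)
    """
    # zero-pad so the signal splits evenly into hops
    pad = (-len(audio)) % HOP
    audio = np.concatenate([audio, np.zeros(pad, dtype=audio.dtype)])
    out, states = [], init_states
    for i in range(0, len(audio), HOP):
        frame, states = step_fn(audio[i:i + HOP], states)
        out.append(frame)
    return np.concatenate(out)
```

With an exported ONNX model, `step_fn` would wrap a session run whose outputs include the updated hidden states, which are fed back in on the next call.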
Thank you very much; this problem was caused by me passing in atten_lim_db. Is there some setting in the model that causes muting? It makes some of my speech incoherent, so I would like to remove the muting setting from the model.
Is it happening only with ONNX inference, or with torch inference of the streaming version too?
It occurs with both ONNX and torch streaming.
Hi @grazder, thanks for your amazing work! I was hoping you could help clear something up for me. As part of your change-set, you added a new 'hidden_states' input/output tensor to Encoder, ErbDecoder, and DfDecoder (in deepfilternet3.py), and I see that in your streaming implementation these are used as part of the state management logic.

What I am confused about is that the main branch's streaming implementation (i.e. the Rust implementation) appears to work using these ONNX models, DeepFilterNet3_onnx.tar.gz, yet I don't see these 'hidden_states' tensors exposed as inputs/outputs of those models. So how does the Rust implementation work without the ability to manage them? It seems necessary based on your pure pytorch implementation. It's very possible that I overlooked something simple. Thanks again!
Hi, @grazder
Hello, we need to store two future frames for a single frame prediction. You can find it here: DeepFilterNet/libDF/src/tract.rs Line 586 in f2445da
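In other words, the model has a two-frame lookahead: the output for frame t depends on frames t+1 and t+2, so the stream runs with a fixed two-hop delay. A minimal sketch of such a lookahead buffer (hypothetical, not the actual tract.rs logic):

```python
from collections import deque

LOOKAHEAD = 2  # frames of future context the model needs

class LookaheadBuffer:
    """Delays the stream by LOOKAHEAD frames so each processed frame
    can see LOOKAHEAD future frames of context."""

    def __init__(self):
        self.buf = deque()

    def push(self, frame):
        self.buf.append(frame)
        if len(self.buf) > LOOKAHEAD:
            # oldest frame now has LOOKAHEAD frames of future context
            current, context = self.buf[0], list(self.buf)[1:]
            self.buf.popleft()
            return current, context
        return None  # still filling: output is delayed
```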
Thanks for your answer! @grazder Sorry to bother you again. I would like to get a tflite model that can perform real-time inference on a DSP. If I use the ONNX model obtained from your model_onnx_export.py file, will there be any problems when converting the ONNX model to a TF model? I noticed that you are using a newly registered operator. Secondly, if I want to use C for inference on the DSP, would you suggest using the original author's single complete model or the three separate models? I have relatively little deployment experience. Wish everything goes well with your work!
I think that you can face problems with RFFT / IRFFT; I don't know a lot about TF operations, so I can't say exactly.

Yeah, the new operator is in the torchDF_main branch. The new operator gave me roughly a 2.5x speedup, and that branch also has some other graph optimizations. You can use the torchDF-changes branch (from this PR). In that variant RFFT / IRFFT are implemented as matmuls, which is suboptimal, but I bet you will not face problems with RFFT / IRFFT export.

Well, you can use the C API; you can find a build in Actions, or you can look at the Actions config. Also, if you want to use C, you can try the onnxruntime C API.
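The matmul trick mentioned above can be checked against NumPy directly: a real DFT of length N is just multiplication by a fixed (N/2+1)-by-N matrix of cosines and sines. This is a generic sketch of the idea, not the exact matrices used in the torchDF-changes branch:

```python
import numpy as np

def rfft_matrices(n):
    """Build matrices so that rfft(x) == real_mat @ x + 1j * (imag_mat @ x)."""
    k = np.arange(n // 2 + 1)[:, None]  # output frequency bins
    t = np.arange(n)[None, :]           # input time samples
    angle = -2.0 * np.pi * k * t / n    # DFT sign convention: e^{-2*pi*i*k*t/n}
    return np.cos(angle), np.sin(angle)

# sanity check against NumPy's FFT
n = 96
real_mat, imag_mat = rfft_matrices(n)
x = np.random.default_rng(0).standard_normal(n)
spec_matmul = real_mat @ x + 1j * (imag_mat @ x)
assert np.allclose(spec_matmul, np.fft.rfft(x))
```

The cost is O(N^2) per frame instead of O(N log N), which is why it is suboptimal, but it exports cleanly because matmul is supported everywhere.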
@grazder Hello! Thank you very much for your contribution providing the pure torch code, which saved me a lot of time that would otherwise have been spent learning Rust.
@grazder Sorry to bother you! I want to quantize the model to 8 bits now, but I have only quantized CV models before. Can you give me some hints or pointers? Should it be quantized in torch or in ONNX?
Hello! I've tried quantizing with ONNX here: https://github.com/grazder/DeepFilterNet/blob/torchDF-temp/torchDF/model_onnx_export.py
Any updates on this? |
I've created this issue about a pure torch reimplementation: #430

Sharing the code. This is a draft PR, so work is still in progress, and I may make some changes later.

You can find my implementation in the torchDF folder. There is also a README.md there with some details. I'll be glad to hear your feedback.

There are also some changes in deepfilternet3.py, modules.py, and multiframe.py. They were necessary to reach 100% compatibility between the streaming tract model and the offline enhance method. This fork was created based on ca46bf5, so some code may be a little outdated.

- Offline model torch implementation: torchDF/torch_df_offline.py
- Streaming model torch implementation: torchDF/torch_df_streaming.py
- To convert the streaming model to ONNX, use torchDF/model_onnx_export.py