We queued a buffer when I was training the model (maybe 500 frames), but the function compute_rnn looks like a frame-in, frame-out flow.
My question is: is there no need to prepare a buffer for inference?
Thanks,
Aaron
Hi there,
rnnoise_process_frame is called during the inference step.
Inside it, compute_frame_features stacks the input x into comb_buf and pitch_buf,
so there's no need to add a buffer in compute_rnn.
I'm not sure this matches the intent of your question; feel free to leave more comments if it doesn't.
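To illustrate the point above, here is a minimal sketch (not RNNoise's actual code) of how a frame-in, frame-out call can still see longer temporal context: the processor keeps an internal history buffer that each call shifts and appends to, much like compute_frame_features stacking input into comb_buf and pitch_buf. FRAME_SIZE matches RNNoise's 480 samples at 48 kHz; HISTORY_FRAMES and the stand-in "feature" are illustrative assumptions.

```python
import numpy as np

FRAME_SIZE = 480        # samples per frame (RNNoise uses 480 at 48 kHz)
HISTORY_FRAMES = 4      # hypothetical amount of past context kept internally

class FrameProcessor:
    """Toy frame-in, frame-out processor with an internal history buffer."""

    def __init__(self):
        # Internal buffer holding the last HISTORY_FRAMES frames of audio,
        # analogous to comb_buf / pitch_buf in RNNoise.
        self.history = np.zeros(FRAME_SIZE * HISTORY_FRAMES, dtype=np.float32)

    def process_frame(self, frame):
        # Caller passes exactly one frame per call; no external queue needed.
        assert len(frame) == FRAME_SIZE
        # Shift the oldest samples out and append the new frame.
        self.history[:-FRAME_SIZE] = self.history[FRAME_SIZE:]
        self.history[-FRAME_SIZE:] = frame
        # "Features" are computed over the whole history, not just this frame;
        # here we just return the RMS of the buffer as a stand-in.
        return float(np.sqrt(np.mean(self.history ** 2)))
```

So even though the caller only ever hands over one frame, the features can depend on several hundred milliseconds of past audio held inside the state.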
Thanks for your reply.
Yes, I am confused about the buffer.
So we only feed one frame's worth of data into the neural network?
I ask because I am trying to construct a real-time flow, and the frame length is a factor in the complexity of the model.
Yes, you need to feed the features (Ex[34], Exp[34], T, corr) extracted from a frame of audio into the neural network.
It's possible to run it in real time, frame by frame.
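A rough sketch of what that real-time loop looks like, under illustrative assumptions: extract_features and rnn_step below stand in for compute_frame_features and the GRU updates in compute_rnn, and the recurrent state carried between calls is what preserves temporal context, so the caller never queues multiple frames.

```python
import numpy as np

FRAME_SIZE = 480  # one frame of audio at 48 kHz, as in RNNoise

def extract_features(frame):
    # Stand-in for compute_frame_features (which produces Ex, Exp, T, corr);
    # here we just use the frame's mean and standard deviation.
    return np.array([np.mean(frame), np.std(frame)], dtype=np.float32)

def rnn_step(features, state, alpha=0.9):
    # Toy recurrent update: the new state mixes the previous state with the
    # current features, the way a GRU state carries context across frames.
    return alpha * state + (1.0 - alpha) * features

# Real-time style loop: one frame in, state updated, next frame.
state = np.zeros(2, dtype=np.float32)
stream = np.random.default_rng(0).standard_normal(FRAME_SIZE * 10).astype(np.float32)
for i in range(0, len(stream), FRAME_SIZE):
    frame = stream[i:i + FRAME_SIZE]
    state = rnn_step(extract_features(frame), state)
```

The only per-call inputs are one frame's features; everything older lives in `state`, which is why frame length (not a multi-frame buffer) drives the per-step complexity.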