May I ask whether you have tried using RAFT or other methods as the teacher network? Would the results be better? Also, how did you pretrain LiteFlowNet on Vimeo90K, and could you provide the code for this pretraining?
If the teacher flow network is replaced with RAFT, the frame interpolation accuracy of IFRNet drops. For the reason, please refer to the Task-Oriented Flow Distillation Loss section of our paper. The LiteFlowNet model used in IFRNet is from https://github.com/sniklaus/pytorch-liteflownet. Thanks.
Thank you, but I couldn't find an analysis in the Task-Oriented Flow Distillation Loss section explaining why RAFT is not as good a teacher network as LiteFlowNet or PWC-Net. Additionally, I noticed that the teacher network in the RIFE paper was changed from an earlier version of LiteFlowNet to DVF. What are your thoughts on that choice?
You can refer to the paper Video Enhancement with Task-Oriented Flow (IJCV 2019). DVF is trained on the same Vimeo90K dataset, while LiteFlowNet is not. Therefore, using DVF's optical flow as the pseudo label is more helpful. It is more of an engineering trick than academic research.
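The discussion above boils down to teacher reliability: a distillation loss should trust the teacher's flow more where that flow actually serves the task. A minimal NumPy sketch of this idea (not the exact IFRNet loss; the function names, the nearest-neighbour warp, and the exponential weighting are illustrative assumptions) weights the per-pixel flow distillation term by how well the teacher flow photometrically warps one frame onto the other:

```python
import numpy as np

def warp_nearest(img, flow):
    """Backward-warp img by flow (nearest-neighbour, for illustration only).

    img: (H, W) grayscale frame; flow: (H, W, 2) with (dx, dy) per pixel."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

def task_oriented_distill_loss(student_flow, teacher_flow, frame0, frame1, beta=10.0):
    """Sketch of a task-oriented flow distillation loss (assumed form, not
    the paper's exact definition): down-weight pixels where the teacher flow
    warps frame1 poorly onto frame0, i.e. where its pseudo label is unreliable."""
    warped = warp_nearest(frame1, teacher_flow)
    photometric_err = np.abs(warped - frame0)   # proxy for teacher reliability
    weight = np.exp(-beta * photometric_err)    # low error -> weight near 1
    epe = np.linalg.norm(student_flow - teacher_flow, axis=-1)  # per-pixel EPE
    return float(np.mean(weight * epe))
```

Under this weighting, a teacher trained on the same distribution as the interpolation task (e.g. DVF on Vimeo90K) produces pseudo labels that keep high weights over more of the image, which is one way to read why the teacher choice matters more than raw flow accuracy.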