Doubt about hardware use #14
I have been using this interpolation model and the results have seemed quite good, but a question has been growing on me.
Does this model benefit more from VRAM or from GPU core frequency? I ask because I usually see the GPU core at 0% utilization, yet the VRAM heats up while the model is running.

I have not noticed this phenomenon. What is your hardware? I use the standard PyTorch implementation of IFRNet and test it on NVIDIA Tesla V100 and NVIDIA RTX 2080 Ti GPUs. The behavior you describe may depend on the specific implementation library and hardware device.

Honestly, I am not sure how to describe the issue precisely. My hardware is a 14-core, 20-thread i7-12700H, an RTX 3070 Ti, 32 GB of DDR5 in quad channel, and 2 TB across two Western Digital Black SN850 drives. The implementation I use is the standard one recommended in the repository, under Arch Linux (I also tried Ubuntu 22.04); both run kernel 5.19. I also tried it on Windows using the Waifu2x GUI.

Thanks for your reply. Does the phenomenon above affect your normal use of IFRNet? Your question is quite specialized, and I cannot give you a specific answer since I am not familiar with the details of VRAM and core frequency. I suggest consulting someone familiar with the underlying implementation.
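A quick way to sanity-check whether inference actually runs on the GPU is sketched below. This is a minimal diagnostic, not part of the IFRNet repository: the `profile_forward` helper, the dummy model, and the tensor shapes are illustrative placeholders, and you would substitute the real IFRNet model with its two input frames from the standard PyTorch implementation.

```python
# Minimal sketch: confirm a PyTorch model's weights sit on the GPU,
# time one forward pass with CUDA events, and report peak VRAM use.
import torch


def profile_forward(model: torch.nn.Module, *inputs: torch.Tensor) -> float:
    """Run one forward pass on the GPU and return its wall time in milliseconds."""
    assert torch.cuda.is_available(), "CUDA not available; inference would fall back to the CPU."

    device = torch.device("cuda")
    model = model.to(device).eval()
    inputs = tuple(t.to(device) for t in inputs)

    # Confirm the parameters really live on the GPU.
    print("model device:", next(model.parameters()).device)

    with torch.no_grad():
        # Warm-up pass so kernel launches and caching do not skew the timing.
        model(*inputs)
        torch.cuda.synchronize()

        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        model(*inputs)
        end.record()
        torch.cuda.synchronize()

    print(f"peak VRAM: {torch.cuda.max_memory_allocated(device) / 2**20:.1f} MiB")
    return start.elapsed_time(end)


# Example usage with a dummy model; replace with the IFRNet model and two
# input frames (e.g. 1x3xHxW tensors) when diagnosing the real pipeline.
if __name__ == "__main__":
    dummy = torch.nn.Conv2d(3, 3, 3, padding=1)
    frame = torch.randn(1, 3, 256, 256)
    print(f"forward pass: {profile_forward(dummy, frame):.2f} ms")
```

If the timed pass completes quickly but monitoring tools still report roughly 0% core utilization, the monitor's sampling interval may simply be missing short bursts of GPU work between frames, which would also be consistent with warm VRAM.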