Bump up Dockerfile and compose to newer syntax format, bump to Torch 2.4.0+CUDA 12.4, updated other deps and more #307
base: main
Conversation
Hi @C0rn3j, thanks for everything you have sent over! Since there are so many changes, it will be a lot for me to validate. A couple of things I have noted, though, are that some of the changes will break certain compatibilities:
If you can let me know any thoughts/tests you did, I can look a bit more in depth at what may work or break things elsewhere, and then validate the setup across Windows and Linux. That takes hours, though, as there are the Standalone environments and then the Text-gen-webui environments to test.
EDIT: Actually, I may have inadvertently fixed a CUDA mishap with the bump: the 12.4 container reports as 12.5 in nvidia-smi, while the 12.3.1 image that whisper.cpp uses (though they end up using runtime rather than devel) reports as 12.3, which breaks on my system. Building the current old Dockerfile and executing it. Going to try
Now on CUDA 12.4, Torch 2.4.0, the newer Docker Compose syntax, a cleaner Dockerfile (including correctly creating layers and cleaning up apt leftovers), and heredocs for readability.
Also rename nvidia dockerfile to sort with other dockerfiles.
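To illustrate the kind of cleanup described above, a minimal Dockerfile sketch follows. The base image tag and package list are placeholders, not the actual contents of this PR; the point is the heredoc `RUN` (which needs the BuildKit syntax directive) and doing the apt cache cleanup in the same layer that created it:

```dockerfile
# syntax=docker/dockerfile:1
FROM nvidia/cuda:12.4.1-devel-ubuntu22.04

# Install OS deps and clean the apt cache in a single RUN, so the
# cache files never get baked into an image layer. The heredoc form
# keeps multi-command RUNs readable without &&-chains.
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git python3 python3-pip
rm -rf /var/lib/apt/lists/*
EOF
```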
Built the Dockerfile and ran docker-compose.yaml with the image swapped for my built version; it seems to run fine, including Nvidia support on my 4000-series card. I do not get the purpose of the Nvidia dockerfile + standalone reqs, and left those mostly alone; the main Dockerfile already supports Nvidia. Shouldn't these just be deleted?
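For reference, the newer Compose syntax requests the GPU via a device reservation rather than the old `runtime: nvidia` key. The service name below is a placeholder; the image tag is the one pushed for this changeset, which you would swap for your locally built image:

```yaml
services:
  alltalk:
    image: c0rn3j/alltalk:1.9.c.1   # or your locally built image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```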
Note that there are quite a few warnings present at the moment.
Some files may still contain references to 11.8/12.1 CUDA.
I am currently running CUDA 12.5 on the host, and a 12.4 container with 12.4 libraries (plus an extra apt dependency for 11.8 compat) in this setup; it seems to generate and analyze TTS just fine.
Also has an extra fix of sorting the WebUI voices.
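The voice-list sort amounts to something like the following. This is a minimal sketch; the actual function and file names in the AllTalk codebase differ:

```python
def sorted_voices(filenames):
    """Return voice files sorted case-insensitively, so the WebUI
    dropdown lists them in a stable, predictable order."""
    return sorted(filenames, key=str.lower)

voices = ["female_03.wav", "Arnold.wav", "male_01.wav"]
print(sorted_voices(voices))  # ['Arnold.wav', 'female_03.wav', 'male_01.wav']
```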
EDIT: I've pushed `c0rn3j/alltalk:1.9.c.1` to Docker Hub, if anyone is interested in the changeset.