DakeQQ/F5-TTS-ONNX

F5-TTS-ONNX

Run F5-TTS using ONNX Runtime for efficient and flexible text-to-speech processing.

Updates

  • 2025/1/10 Update: The code has been updated to support the latest version of SWivid/F5-TTS, enabling successful export to ONNX format. Issues with missing Python package imports have been resolved. If you hit errors with previous versions, please download the latest code and try again.
  • The latest version accepts audio as int16 (short) and outputs int16 as well. The float format supported by previous versions is no longer accepted by the current Inference.py.
  • The CUDAExecutionProvider does not currently work with float16 (cause unknown) but works correctly with float32.
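Since the current Inference.py works with int16 audio end to end, audio loaded as float samples must be converted first. A minimal sketch of the conversion, using only the standard library (the function names here are illustrative, not part of the repo):

```python
import array

def float_to_int16(samples):
    """Clip float samples to [-1.0, 1.0] and scale to the int16 (short) range."""
    clipped = (max(-1.0, min(1.0, s)) for s in samples)
    return array.array('h', (int(s * 32767) for s in clipped))

def int16_to_float(samples):
    """Scale int16 samples back to floats in [-1.0, 1.0]."""
    return [s / 32767.0 for s in samples]

# float_to_int16([0.0, 1.0, -1.0]) -> array('h', [0, 32767, -32767])
```

In practice you would apply `float_to_int16` to the reference audio before feeding the model, and `int16_to_float` (or a direct int16 WAV write) to the model output.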

Features

  1. AMD GPU + Windows OS:

    • Easy solution using ONNX-DirectML for AMD GPUs on Windows.
    • Install ONNX Runtime DirectML:
      pip install onnxruntime-directml --upgrade
  2. CPU Only:

    • For 'CPU only' setups, whether Intel or AMD, you can try ['OpenVINOExecutionProvider'] and add provider_options for a modest performance boost of roughly 5~20%.
    • provider_options = [{
         'device_type': 'CPU',
         'precision': 'ACCURACY',
         'num_of_threads': MAX_THREADS,
         'num_streams': 1,
         'enable_opencl_throttling': True,
         'enable_qdq_optimizer': False
      }]
    • Remember to pip uninstall onnxruntime-gpu and pip uninstall onnxruntime-directml first, then pip install onnxruntime-openvino --upgrade.
  3. Intel OpenVINO:

    • If you are using a recent Intel chip, you can try ['OpenVINOExecutionProvider'] with provider_options 'device_type': 'XXX', where XXX is one of the following (not guaranteed to work or perform well):
      • CPU
      • GPU
      • NPU
      • AUTO:NPU,CPU
      • AUTO:NPU,GPU
      • AUTO:GPU,CPU
      • AUTO:NPU,GPU,CPU
      • HETERO:NPU,CPU
      • HETERO:NPU,GPU
      • HETERO:GPU,CPU
      • HETERO:NPU,GPU,CPU
    • Remember to pip uninstall onnxruntime-gpu and pip uninstall onnxruntime-directml first, then pip install onnxruntime-openvino --upgrade.
  4. Simple GUI Version:

  5. NVIDIA TensorRT Support:

    • For NVIDIA GPU optimization with TensorRT, visit:
      F5-TTS-TRT
  6. Download
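Tying options 1–3 together: provider selection ultimately comes down to the providers and provider_options arguments of onnxruntime.InferenceSession. A sketch under stated assumptions ("f5_tts.onnx" is a placeholder model path, and MAX_THREADS is whatever suits your CPU):

```python
def openvino_provider_options(device_type='CPU', max_threads=8):
    """Build the OpenVINO provider_options list shown above."""
    return [{
        'device_type': device_type,        # 'CPU', 'GPU', 'NPU', 'AUTO:NPU,CPU', ...
        'precision': 'ACCURACY',
        'num_of_threads': max_threads,
        'num_streams': 1,
        'enable_opencl_throttling': True,
        'enable_qdq_optimizer': False,
    }]

# Usage (requires onnxruntime-directml or onnxruntime-openvino installed,
# plus an exported model; "f5_tts.onnx" is a placeholder path):
#
# import onnxruntime as ort
#
# # AMD GPU on Windows via DirectML:
# session = ort.InferenceSession("f5_tts.onnx",
#                                providers=['DmlExecutionProvider'])
#
# # CPU / recent Intel chips via OpenVINO:
# session = ort.InferenceSession("f5_tts.onnx",
#                                providers=['OpenVINOExecutionProvider'],
#                                provider_options=openvino_provider_options())
```

Only one of the onnxruntime packages should be installed at a time, which is why the uninstall steps above matter before switching providers.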

Learn More

