diff --git a/README.md b/README.md
index 330fe9ee..0e6b50db 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,7 @@
 - **Local UI:** Streamlit for interactive model deployment and testing
 
 ## Latest News 🔥
+
 - Support Nexa AI's own vision-language model (0.9B parameters): `nexa run omnivision`, and audio-language model (2.9B parameters): `nexa run omniaudio`
 - Support audio-language model: `nexa run qwen2audio`; **we are the first open-source toolkit to support audio-language models with the GGML tensor library.**
 - Support iOS Swift binding for local inference on **iOS mobile** devices.
@@ -32,13 +33,13 @@ Welcome to submit your requests through [issues](https://github.com/NexaAI/nexa-
 
 ## Install Option 1: Executable Installer
@@ -205,18 +206,18 @@ pip install -e .
 
 Below is our differentiation from other similar tools:
 
-| **Feature** | **[Nexa SDK](https://github.com/NexaAI/nexa-sdk)** | **[ollama](https://github.com/ollama/ollama)** | **[Optimum](https://github.com/huggingface/optimum)** | **[LM Studio](https://github.com/lmstudio-ai)** |
-| -------------------------- | :------------------------------------------------: | :--------------------------------------------: | :---------------------------------------------------: | :---------------------------------------------: |
-| **GGML Support** | ✅ | ✅ | ❌ | ✅ |
-| **ONNX Support** | ✅ | ❌ | ✅ | ❌ |
-| **Text Generation** | ✅ | ✅ | ✅ | ✅ |
-| **Image Generation** | ✅ | ❌ | ❌ | ❌ |
-| **Vision-Language Models** | ✅ | ✅ | ✅ | ✅ |
-| **Audio-Language Models** | ✅ | ❌ | ❌ | ❌ |
-| **Text-to-Speech** | ✅ | ❌ | ✅ | ❌ |
-| **Server Capability** | ✅ | ✅ | ✅ | ✅ |
-| **User Interface** | ✅ | ❌ | ❌ | ✅ |
-| **Executable Installation** | ✅ | ✅ | ❌ | ✅ |
+| **Feature** | **[Nexa SDK](https://github.com/NexaAI/nexa-sdk)** | **[ollama](https://github.com/ollama/ollama)** | **[Optimum](https://github.com/huggingface/optimum)** | **[LM Studio](https://github.com/lmstudio-ai)** |
+| --------------------------- | :------------------------------------------------: | :--------------------------------------------: | :---------------------------------------------------: | :---------------------------------------------: |
+| **GGML Support** | ✅ | ✅ | ❌ | ✅ |
+| **ONNX Support** | ✅ | ❌ | ✅ | ❌ |
+| **Text Generation** | ✅ | ✅ | ✅ | ✅ |
+| **Image Generation** | ✅ | ❌ | ❌ | ❌ |
+| **Vision-Language Models** | ✅ | ✅ | ✅ | ✅ |
+| **Audio-Language Models** | ✅ | ❌ | ❌ | ❌ |
+| **Text-to-Speech** | ✅ | ❌ | ✅ | ❌ |
+| **Server Capability** | ✅ | ✅ | ✅ | ✅ |
+| **User Interface** | ✅ | ❌ | ❌ | ✅ |
+| **Executable Installation** | ✅ | ✅ | ❌ | ✅ |
 
 ## Supported Models & Model Hub
 
@@ -257,25 +258,37 @@ Supported model examples (full list at [Model Hub](https://nexa.ai/models)):
 | [bark-small](https://nexa.ai/suno/bark-small/gguf-fp16/readme) | Text-to-Speech | GGUF | `nexa run bark-small:fp16` |
 
 ## Run Models from 🤗 HuggingFace or 🤖 ModelScope
+
 You can pull, convert (to .gguf), quantize and run [llama.cpp supported](https://github.com/ggerganov/llama.cpp#description) text generation models from HF or MS with Nexa SDK.
+
 ### Run .gguf File
+
 Use `nexa run -hf
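The `nexa run -hf` command above is truncated in the last hunk, so its full syntax is not recoverable from this diff. Below is a hedged sketch of the likely usage, assuming the `-hf` flag takes a Hugging Face repo id that hosts a llama.cpp-compatible .gguf file; the placeholder repo id is illustrative, not taken from the source.

```bash
# Hedged sketch, not part of the diff: assumes `nexa run -hf` accepts a
# Hugging Face repo id hosting a llama.cpp-compatible .gguf file.
# Replace the placeholder with a real repo id; the exact flag syntax may differ.
nexa run -hf <huggingface-repo-id-with-gguf>
```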