feat: Integrate Ollama backend support
- Added the `ollama-rs` crate dependency to integrate with the Ollama AI service.
- Introduced `async-stream` and `tokio-stream` dependencies to support async data streaming.
- Updated `Cargo.toml` and `Cargo.lock` for new dependencies.

docs: Update documentation for Ollama integration

- Added instructions in `docs/getting_started.md` for using the Ollama backend.
- Expanded `docs/installation.md` with details on additional setup steps needed for the Ollama service, including installation and firewall configuration.

feat: Implement OllamaInterface for chat service

- Created `OllamaInterface` in `src/provider/ollama/ollama_interface.rs` with methods for interacting with the Ollama backend.
- Updated the chat service builder to handle the Ollama backend selection based on configuration.

refactor: Cleanup Model structure in configuration

- Removed unused `port` field in `Model` struct within `src/config/config_file.rs`.
- Retained the optional `url` field so each model's service endpoint can be configured dynamically.

init: Include Ollama model configuration in default setup

- Enhanced `src/cli/init/mod.rs` to add default configurations for Ollama models in the initialization step.

This update adds the Ollama service as a supported AI provider, giving Rusty Buddy users greater flexibility in choosing a backend.
Christian Stolz committed Oct 6, 2024
1 parent d455727 commit 190f38a
Showing 10 changed files with 168 additions and 5 deletions.
41 changes: 41 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions Cargo.toml
@@ -8,6 +8,7 @@ edition = "2021"
[dependencies]
reqwest = { version = "0.12", features = ["json"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
async-openai = "0.24.0"
@@ -30,3 +31,4 @@ ignore = "0.4"
log = "0.4"
env_logger = "0.11"
async-trait = "0.1"
ollama-rs = { version = "0.2", features = ["stream"] }
21 changes: 21 additions & 0 deletions docs/docs/getting_started.md
@@ -38,6 +38,27 @@ Recommended persona: [Recommended Persona]
- A default `config.toml` file is generated in the `.rusty` directory.
- This file includes the recommended persona and sets default models for chat and commit message generation.

## Choosing Your AI Provider

After configuring your environment, you can select between different AI backends, including OpenAI and Ollama, depending on your needs or preferences.

### Using the Ollama Backend

To use the Ollama backend, point the desired models in your `config.toml` at Ollama. The `api_name` must name a model that is available in your local Ollama instance (for example, one pulled with `ollama pull llama3.2`):
```toml
[ai]
chat_model = "ollama_32"
commit_model = "ollama_32"
wish_model = "ollama_32"

[[models]]
name = "ollama_32"
api_name = "llama3.2"
backend = "Ollama"
url = "http://localhost:11434"

```
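To verify that the configured endpoint is reachable and that the model named in `api_name` is installed, you can query Ollama's `/api/tags` route, which lists locally available models. The sketch below is a minimal standalone check, assuming the default URL and using the `reqwest` and `tokio` crates already present in the project:

```rust
// Minimal sketch: list the models a local Ollama instance serves so you can
// verify that the `api_name` from config.toml (e.g. "llama3.2") is available.
// Assumes the default endpoint http://localhost:11434 and the /api/tags route.
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let body = reqwest::get("http://localhost:11434/api/tags")
        .await?
        .text()
        .await?;

    // The response is JSON with a "models" array; printing it raw is enough
    // to eyeball whether the expected model name appears.
    println!("{body}");
    Ok(())
}
```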

## Example Usage

Once your setup is complete, you can start using Rusty Buddy right away. Here are a few common scenarios:
14 changes: 13 additions & 1 deletion docs/docs/installation.md
@@ -46,4 +46,16 @@ If you prefer to have more control over the installation or need to modify the s
- Ensure that Rust and Cargo are installed on your system. You can install them via [rustup](https://rustup.rs/).
- Network access may be required for both installation methods, particularly for downloading dependencies or connecting with the OpenAI API.

By following these instructions, you will be able to set up Rusty Buddy and harness its capabilities in your development workflows. Choose the installation method that aligns with your needs and system configuration.

## Additional Requirements for Ollama

To use the Ollama backend in Rusty Buddy, you need to install and configure the Ollama service. This section covers the additional dependencies and setup steps required.

### Step 1: Install Ollama

Ensure that the Ollama service is installed and running on your machine. You can follow the installation guide in the [official Ollama documentation](https://ollama.com).

### Step 2: Configure Firewall and Ports

Make sure your network allows communication through the port that Ollama uses (default is 11434).
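If requests to Ollama hang or fail, a quick way to confirm that the port is open is to attempt a TCP connection to it. This is a minimal standalone sketch, assuming Ollama runs locally on the default port:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

fn main() {
    // Assumed default Ollama address; adjust if your config.toml points elsewhere.
    let addr: SocketAddr = "127.0.0.1:11434".parse().expect("valid socket address");

    match TcpStream::connect_timeout(&addr, Duration::from_secs(2)) {
        Ok(_) => println!("Ollama port is reachable at {addr}"),
        Err(e) => eprintln!("Could not reach Ollama at {addr}: {e}"),
    }
}
```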
6 changes: 5 additions & 1 deletion src/chat/service_builder.rs
@@ -2,6 +2,7 @@ use crate::chat::interface::{ChatBackend, ChatStorage};
use crate::chat::service::ChatService;
use crate::config::{AIBackend, CONFIG};
use crate::persona::Persona;
use crate::provider::ollama::ollama_interface::OllamaInterface;
use crate::provider::openai::openai_interface::OpenAIInterface;
use log::debug;
use std::error::Error;
@@ -55,7 +56,10 @@ impl ChatServiceBuilder {
         // Check which provider to use based on the model
         let backend: Box<dyn ChatBackend> = match &model.backend {
             AIBackend::OpenAI => Box::new(OpenAIInterface::new(model.api_name.clone())), // Additional backends can be added here
-            _ => return Err(format!("Unknown backend for model: {:?}", model.backend).into()),
+            AIBackend::Ollama => Box::new(OllamaInterface::new(
+                model.api_name.clone(),
+                model.url.clone(),
+            )), // New line added
         };

         Ok(ChatService::new(backend, storage, persona, self.directory))
6 changes: 6 additions & 0 deletions src/cli/init/mod.rs
@@ -126,6 +126,12 @@ backend = "OpenAI"
name = "openai_complex"
api_name = "gpt-4o-2024-08-06"
backend = "OpenAI"
[[models]]
name = "ollama_complex"
api_name = "llama3.2"
backend = "Ollama"
url = "http://localhost:11434"
"#,
recommended_persona
);
3 changes: 0 additions & 3 deletions src/config/config_file.rs
@@ -33,10 +33,7 @@ pub struct AI {
 pub struct Model {
     pub name: String,
     pub api_name: String,
-    #[allow(dead_code)]
     pub url: Option<String>,
-    #[allow(dead_code)]
-    pub port: Option<u16>,
     pub backend: AIBackend,
 }

1 change: 1 addition & 0 deletions src/provider/mod.rs
@@ -1 +1,2 @@
pub mod ollama;
pub mod openai;
1 change: 1 addition & 0 deletions src/provider/ollama/mod.rs
@@ -0,0 +1 @@
pub mod ollama_interface;
78 changes: 78 additions & 0 deletions src/provider/ollama/ollama_interface.rs
@@ -0,0 +1,78 @@
// src/provider/ollama/ollama_interface.rs

use crate::chat::interface::{ChatBackend, Message, MessageRole};
use async_trait::async_trait;
use ollama_rs::{
    generation::chat::{request::ChatMessageRequest, ChatMessage, ChatMessageResponseStream},
    IntoUrlSealed, Ollama,
};
use std::error::Error;
use tokio_stream::StreamExt;

pub struct OllamaInterface {
    ollama: Ollama,
    model: String,
}

impl OllamaInterface {
    pub fn new(model: String, ourl: Option<String>) -> Self {
        let url = ourl.unwrap_or("http://localhost:11434".into());
        OllamaInterface {
            ollama: Ollama::from_url(url.clone().into_url().unwrap()),
            model,
        }
    }

    fn convert_messages(messages: &[Message]) -> Vec<ChatMessage> {
        let mut chat_messages: Vec<ChatMessage> = Vec::new();

        // Convert Message into ChatMessage for ollama
        for msg in messages {
            match msg.role {
                MessageRole::User => {
                    chat_messages.push(ChatMessage::user(msg.content.clone()));
                }
                MessageRole::Assistant => {
                    chat_messages.push(ChatMessage::assistant(msg.content.clone()));
                }
                MessageRole::Context => {
                    chat_messages.push(ChatMessage::system(msg.content.clone()));
                }
                MessageRole::System => {
                    chat_messages.push(ChatMessage::system(msg.content.clone()));
                }
            }
        }
        chat_messages
    }
}

#[async_trait]
impl ChatBackend for OllamaInterface {
    async fn send_request(
        &mut self,
        messages: &[Message],
        _use_tools: bool,
    ) -> Result<String, Box<dyn Error>> {
        let chat_messages = Self::convert_messages(messages);

        let request = ChatMessageRequest::new(self.model.clone(), chat_messages.clone());

        let mut stream: ChatMessageResponseStream =
            self.ollama.send_chat_messages_stream(request).await?;

        let mut response = String::new();

        while let Some(Ok(res)) = stream.next().await {
            if let Some(assistant_message) = res.message {
                response += &assistant_message.content;
            }
        }
        Ok(response)
    }

    fn print_statistics(&self) {
        // Implement statistics if required
        println!("Using Ollama model: {}", self.model);
    }
}
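For orientation, the sketch below shows how this backend might be exercised on its own. It is not part of the commit: the `rusty_buddy` crate path, the public visibility of these modules, and the assumption that `Message` is a plain struct with public `role` and `content` fields (as the conversion code above suggests) are hypothetical, and a `llama3.2` model is assumed to be available locally.

```rust
// Hypothetical usage sketch (not part of this commit). Assumes the modules are
// publicly exported from a `rusty_buddy` crate and that `Message` has public
// `role` and `content` fields, as the conversion code above suggests.
use rusty_buddy::chat::interface::{ChatBackend, Message, MessageRole};
use rusty_buddy::provider::ollama::ollama_interface::OllamaInterface;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // With `None`, OllamaInterface falls back to http://localhost:11434.
    let mut backend = OllamaInterface::new("llama3.2".into(), None);

    let messages = vec![Message {
        role: MessageRole::User,
        content: "Say hello in one short sentence.".into(),
    }];

    // send_request streams the reply internally and returns the accumulated text.
    let reply = backend.send_request(&messages, false).await?;
    println!("{reply}");

    backend.print_statistics();
    Ok(())
}
```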
