
Add support to use other LLM platforms #6

Open
StashaS opened this issue Sep 12, 2024 · 3 comments
Labels: far far away in a distant galaxy (Long term plans), medium (Medium complexity task, you need to know the project)

Comments

@StashaS
Collaborator

StashaS commented Sep 12, 2024

Context:
The current implementation supports only GCP, which is not fair to other vendors. :)

Task:

  1. Split the toolkit logic into the actual toolkit and a client that makes requests to the LLM provider.
  2. Move each LLM provider's client logic into a separate package.
  3. Add support and instructions for the popular LLM platforms.

Result:
The user should be able to configure and choose an LLM platform.
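A minimal sketch of what the proposed split could look like: the toolkit depends only on an abstract client interface, each provider's client lives in its own module, and the user picks a platform by name. All names here (`LLMClient`, `GeminiClient`, `OpenAIClient`, `make_client`) are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch of the toolkit/client split, not the project's real code.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Minimal interface the toolkit would call instead of GCP directly."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class GeminiClient(LLMClient):
    def generate(self, prompt: str) -> str:
        # A real client would call the Gemini API here; stubbed for illustration.
        return f"[gemini] {prompt}"


class OpenAIClient(LLMClient):
    def generate(self, prompt: str) -> str:
        # A real client would call the OpenAI API here; stubbed for illustration.
        return f"[openai] {prompt}"


def make_client(platform: str) -> LLMClient:
    """User-facing configuration point: choose the LLM platform by name."""
    clients = {"gemini": GeminiClient, "openai": OpenAIClient}
    try:
        return clients[platform]()
    except KeyError:
        raise ValueError(f"Unsupported LLM platform: {platform}")


client = make_client("gemini")
print(client.generate("hello"))
```

With this shape, adding a new platform means adding one package with an `LLMClient` implementation and registering it, without touching the toolkit logic.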

@StashaS added the medium (Medium complexity task, you need to know the project) and far far away in a distant galaxy (Long term plans) labels Sep 12, 2024
@b0noI
Collaborator

b0noI commented Sep 30, 2024

The more I think about this, the more I think we need to stick to just one. As we have learned, making even one work reliably is super hard, and many bugs are tied to one specific LLM. So instead of rebuilding something like CrewAI, we can be MUCH better, but only for one very specific LLM :) That will be our target market: users of Gemini who want a better agent-building framework.

@StashaS
Collaborator Author

StashaS commented Oct 1, 2024

To mitigate recent failures, we can probably implement a series of test cases that check whether a model's responses suit the toolkit, and track the ratio of failures.
In the current project state this would test our prompts and the adequacy of Gemini's responses. For potential future model integrations, it would be a prerequisite.
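One way the failure-ratio idea could be sketched: run each prompt several times and require that the share of unusable responses stays below a threshold. The `generate` callable and the `response_is_usable` predicate are stand-ins for the toolkit's real client and validation, which this issue doesn't specify.

```python
# Hypothetical failure-ratio check; the client and validity predicate
# are illustrative stand-ins, not the project's real API.

def response_is_usable(response: str) -> bool:
    # Stand-in check: a real one would validate the structure the
    # toolkit expects (e.g. parseable tool calls).
    return bool(response.strip())


def failure_ratio(generate, prompts, runs_per_prompt=5):
    """Share of runs whose response the toolkit could not use."""
    failures = total = 0
    for prompt in prompts:
        for _ in range(runs_per_prompt):
            total += 1
            if not response_is_usable(generate(prompt)):
                failures += 1
    return failures / total


# Example with a fake model that always answers:
ratio = failure_ratio(lambda p: f"answer to {p}", ["a", "b"], runs_per_prompt=3)
assert ratio == 0.0
```

Running each prompt multiple times matters because LLM responses are nondeterministic; a single pass can hide flakiness that a ratio over repeated runs exposes.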

@b0noI
Collaborator

b0noI commented Oct 7, 2024

I love this idea.
We can have acceptance tests
where we define a set of prompts and what is considered good-enough quality.
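The acceptance-test idea above could take roughly this shape: a fixed prompt set, a per-prompt "good enough" check, and an overall pass-rate bar. The case definitions, predicates, and threshold are all hypothetical placeholders, since the thread doesn't pin down concrete criteria.

```python
# Hypothetical acceptance-test harness: prompts paired with quality checks.
# Cases, predicates, and the pass-rate threshold are illustrative only.

ACCEPTANCE_CASES = [
    # (prompt, predicate deciding whether the response is good enough)
    ("Summarize: the sky is blue.", lambda r: "blue" in r.lower()),
    ("Reply with the word OK.", lambda r: "ok" in r.lower()),
]


def run_acceptance(generate, cases=ACCEPTANCE_CASES, min_pass_rate=0.9):
    """Return (passed?, pass rate) for a model over the acceptance set."""
    passed = sum(1 for prompt, is_good in cases if is_good(generate(prompt)))
    rate = passed / len(cases)
    return rate >= min_pass_rate, rate


# A fake model that echoes the prompt happens to satisfy both checks:
ok, rate = run_acceptance(lambda p: p)
print(ok, rate)
```

The same harness could then gate any future provider integration: a new LLM platform is accepted only if it clears the same prompt set and threshold that Gemini does.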
