The more I think about this, the more I think we need to stick to just one. As we have learned, making even one work reliably is super hard; there are many bugs tied to one specific LLM. So, instead of rebuilding something like CrewAI, we can be MUCH better, but only for one very specific LLM :) That will be our target market: users of Gemini who want a better agent-building framework.
To mitigate recent failures, we could implement a suite of test cases that checks whether a model's responses suit the toolkit, and measures the failure ratio.
For the current project state, it would test the adequacy of prompts and Gemini responses. For potential future model integrations, it would be a prerequisite.
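A minimal sketch of what such a compatibility suite could look like, assuming a hypothetical `call_model(prompt)` wrapper around the real Gemini client; the test cases, threshold, and names are all placeholders, not the project's actual API:

```python
"""Sketch of a model-compatibility test suite (all names hypothetical)."""

import json
import re


def call_model(prompt: str) -> str:
    # Hypothetical stand-in: wire up the real Gemini client call here.
    raise NotImplementedError


# Each case pairs a prompt with a predicate deciding whether the
# response is usable by the toolkit (e.g., valid tool-call JSON).
TEST_CASES = [
    ("Return the tool call as JSON: list_files(path='.')",
     lambda r: json.loads(r).get("tool") == "list_files"),
    ("Answer with a single integer: 2 + 2",
     lambda r: re.fullmatch(r"\s*4\s*", r) is not None),
]

MAX_FAILURE_RATIO = 0.1  # threshold is an assumption; tune per model


def failure_ratio(runs_per_case: int = 5) -> float:
    """Run each prompt several times and return failures / total."""
    failures = total = 0
    for prompt, is_valid in TEST_CASES:
        for _ in range(runs_per_case):
            total += 1
            try:
                if not is_valid(call_model(prompt)):
                    failures += 1
            except Exception:  # malformed JSON, API error, etc.
                failures += 1
    return failures / total


def test_model_suits_toolkit():
    assert failure_ratio() <= MAX_FAILURE_RATIO
```

Running each prompt several times matters because LLM output is nondeterministic; a ratio, rather than a single pass/fail, is what would gate a future model integration.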
Context:
The current implementation supports only GCP, which is not fair to other vendors. :)
Task:
Make the LLM platform configurable instead of hard-coded to GCP.
Result:
The user should be able to configure and choose the LLM platform.
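One way to support that is a small backend abstraction with a registry keyed by a config value. This is a sketch under assumed names (`LLMBackend`, `make_backend`, the `"llm_platform"` config key); the project's real interfaces may differ:

```python
"""Minimal sketch of a pluggable LLM backend (names are hypothetical)."""

from abc import ABC, abstractmethod


class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for the given prompt."""


class GeminiBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Delegate to the existing GCP/Gemini integration here.
        raise NotImplementedError


class OpenAIBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Placeholder for a future non-GCP vendor integration.
        raise NotImplementedError


# Registry lets users pick a platform by config key.
BACKENDS = {"gemini": GeminiBackend, "openai": OpenAIBackend}


def make_backend(name: str) -> LLMBackend:
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"unknown LLM platform: {name!r}") from None


# Usage: backend = make_backend(config["llm_platform"])
```

Even if the project deliberately stays Gemini-only for now (per the comment above), an abstraction like this keeps the door open without committing to other vendors yet.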