Provide support for more than two models and provide a training guide. #23
Comments
Hi, thanks for raising this! That's definitely something we've been thinking about for a while. We did our initial research focusing on only two models as a start, but more research is required to build routers that work well with multiple models.
Hey @iojw, would the team need some help in this regard? I could help with benchmarks and reporting results, and/or write code to extend this feature. What do you say?
This is the direction I want to go in personally. Please share your thoughts on testing and implementation. Are you already working on it?
@bitnom We're currently working on more than two models; you can see some of our results and read more about it here: https://tryplurally.com/.
Hi @villqrd, thanks for sharing the update! I was eager to check out the results, but the link (https://tryplurally.com/) leads to the main page without any specific details on the models you mentioned. Could you point to the section or page with more detailed information? I'm intrigued by this work and would love to learn more about the models and results.
It looks like this only supports two models, a strong and a weak one. But there are other factors to consider: whether privacy is a concern, whether the question is math-heavy, whether it has a visual element, and so on.
Why not have RouteLLM route to several arbitrary models, including local, self-hosted, or models-as-a-service like GPT-4?
It would also help to provide example training scripts and/or a training guide we could use to fine-tune this.
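To make the request concrete, here is a minimal sketch of what routing across N arbitrary models could look like, using a simple rule-based selector over declared model capabilities. Everything here (`ModelSpec`, `route`, the skill tags) is hypothetical and not part of RouteLLM's actual API:

```python
# Hypothetical sketch, not RouteLLM's API: route a query to one of
# N arbitrary models (local or hosted) based on simple constraints.
from dataclasses import dataclass, field

@dataclass
class ModelSpec:
    name: str                 # e.g. "gpt-4" or "local-llama-3-8b"
    cost: float               # relative cost per call
    is_local: bool            # self-hosted models can satisfy privacy needs
    skills: set = field(default_factory=set)  # e.g. {"math", "vision"}

def route(query_tags: set, require_privacy: bool, models: list) -> ModelSpec:
    """Pick the cheapest model whose skills cover the query's tags,
    restricted to local models when privacy is required."""
    candidates = [
        m for m in models
        if query_tags <= m.skills and (m.is_local or not require_privacy)
    ]
    if not candidates:
        raise ValueError("no model satisfies the query's constraints")
    return min(candidates, key=lambda m: m.cost)

models = [
    ModelSpec("local-llama", cost=0.1, is_local=True, skills={"general"}),
    ModelSpec("gpt-4", cost=1.0, is_local=False,
              skills={"general", "math", "vision"}),
]
print(route({"math"}, require_privacy=False, models=models).name)    # -> gpt-4
print(route({"general"}, require_privacy=True, models=models).name)  # -> local-llama
```

In practice the hand-written filter would presumably be replaced by a learned router that scores each candidate model for a given query, generalizing the two-model (strong/weak) routers to N outputs; that generalization is the research question raised above.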