Option to use other REST APIs besides Closed OpenAI #1
I'm happy to have other models. The code is abstracted enough to handle other LLMs: just subclass the LLM abstract class. I haven't worked with anything but OpenAI, though. Want to start a PR with your chosen model?
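To make the "just subclass the LLM abstract class" suggestion concrete, here is a minimal sketch. The class and method names (`LLM`, `complete`) are assumptions for illustration; the real base class in the repo may use different names.

```python
from abc import ABC, abstractmethod

class LLM(ABC):
    """Hypothetical sketch of the repo's LLM abstract base class;
    the actual method names may differ."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the backend and return its reply."""

class EchoLLM(LLM):
    """Toy provider showing the shape a new backend subclass takes."""

    def complete(self, prompt: str) -> str:
        # A real provider would call its REST API here.
        return f"echo: {prompt}"
```

A new provider (Groq, Gemini, a local model, etc.) would live in its own file and implement the same interface, so the rest of the code never needs to know which backend is in use.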
My backlog is huge, but I suppose I can double dip with another library I was working on. It only does NVIDIA HuggingFace, but I have code to make it work for Apple Silicon and have been meaning to integrate that (though I have no such silicon, so we'd need GitHub Actions to test it). Adding Groq, Gemini, and Mistral is probably also doable given each is a standard REST API; the main hurdle would be if it's doing something fancy. Are you using function calling? Can you point me to the part of the code where you define the API you want to use to invoke the models?
Sure, here is the LLM abstract base class. If you make another file in that folder like … Not using function calling either. The prompts are pretty simple. You can see the OpenAI prompts here. We will likely just copy them verbatim to a …
Alright, I'm on it. Some questions popped up:
Unfortunately I learned the Gemini API is in the same boat as Anthropic and OpenAI: I can't use it for work because it's got a customer noncompete, so I moved the Gemini(LLM) and Anthropic(LLM) classes into, ahem, … I can't test MLX, so we either need you to have a MacBook Pro or we need to set up GitHub Actions, or both. Here's roughly what I got done so far:
Wow, cool! I would say the minimum Python should be 3.11 (from pyproject.toml). Re local LLMs, I am unable to run them; I don't have good hardware. I would want to make the local LLMs optional by default, so that …
So I was chatting with a friend and they pointed me in the direction of vLLM as an interface to local LLMs. Apparently it handles obtaining models and running local servers, as well as providing a standardized interface to those local models. Do you have any experience with it? https://docs.vllm.ai/en/latest/getting_started/quickstart.html#using-openai-chat-api-with-vllm
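Since vLLM serves an OpenAI-compatible chat endpoint, any OpenAI-style client can talk to a local server just by swapping the base URL. Here is a stdlib-only sketch of the request shape; the default model id and local port are assumptions taken from the vLLM quickstart, not values from this repo.

```python
def vllm_chat_request(prompt: str,
                      model: str = "meta-llama/Llama-3.1-8B-Instruct",
                      base_url: str = "http://localhost:8000/v1") -> dict:
    """Build the URL and JSON body for a POST to vLLM's
    OpenAI-compatible /chat/completions endpoint."""
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Because the wire format is the same as OpenAI's, an existing OpenAI backend class could likely support local models with only a configurable base URL.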
Haven't used it, but those docs look great! We can absolutely include this as one of the services, and it looks like it could help support a wider variety of hardware and models.
It would be nice for this to work with Groq's llama 3.1 models.
Oh hell yes, that's really easy. My bad for not checking sooner, I'm just in major crunch time. Here, let me find the code for that:
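For context on why this is easy: Groq exposes an OpenAI-compatible API, so support can amount to swapping the base URL and model name. A minimal sketch of the settings involved; the URL and the model id "llama-3.1-8b-instant" come from Groq's public docs and are assumptions, not values from this repo.

```python
import os

def groq_settings(model: str = "llama-3.1-8b-instant") -> dict:
    """Connection settings for Groq's OpenAI-compatible endpoint.
    Reads the API key from the GROQ_API_KEY environment variable."""
    return {
        "base_url": "https://api.groq.com/openai/v1",
        "api_key": os.environ.get("GROQ_API_KEY", ""),
        "model": model,
    }
```

These settings could be fed to any OpenAI-style client that accepts a custom base URL.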
In the HuggingFaceModelConfig enum, add a variant with the model name and context limit, and make that the default.
Super easy!
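The enum change described above might look roughly like this. This is a hypothetical sketch: the real enum's variants and field names may differ, and the 128K context figure is the published limit for Llama 3.1, not a value from this repo.

```python
from enum import Enum

class HuggingFaceModelConfig(Enum):
    """Hypothetical sketch of the enum mentioned above; each member
    carries a model name and its context-window limit."""

    LLAMA_3_1_8B = ("meta-llama/Llama-3.1-8B-Instruct", 128_000)

    def __init__(self, model_name: str, context_limit: int):
        # Enum unpacks each member's tuple value into __init__,
        # exposing the fields as attributes on the member.
        self.model_name = model_name
        self.context_limit = context_limit

# Make the new variant the default, per the comment above.
DEFAULT_MODEL = HuggingFaceModelConfig.LLAMA_3_1_8B
```

Storing the context limit alongside the model name keeps prompt-truncation logic in one place instead of scattering magic numbers through the code.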
Thanks!
Looks like a good project and I would like to try this. Would it be possible to add HF / Gemini options? HF has more diversity and Gemini is smarter, and neither has a "customer noncompete" clause like this:
<rant>
Unfortunately I can't accept this term because it's extremely unsafe long-term and also explicitly anticompetitive. Are you down with never being able to train on your chat logs, or use manifest to develop models that compete with a company as broadly competitive as OpenAI?
Maybe it works for some; obviously the service is popular, and I used to tolerate it. But once I realized the seriousness of this clause, I felt compelled to delete my OpenAI account. It's truly hypocritical of them to learn from us and then turn around and claim we cannot learn from them.
I meekly suggest switching to HF / Gemini and deleting all support for OpenAI, or at least not making this legal license your default. It isn't sensible to encourage all your users to incur such a prohibition on learning: the AI could respond with some serious issue, and they couldn't use that to prevent future unsafe activity from their own AIs. It truly does create a bad situation, and I encourage everyone who reads this to divest from OpenAI ASAP, for the sake of basic intellectual freedom.
</rant>