🚀 Feature
In addition to the hardcoded models, add support for using local models as judges for evaluation. This can be simplified by requiring only the OpenAI API.
It should essentially be an endpoint selection; the Azure-hosted pipeline could probably be extended to cover this. If this already works, add documentation on how it can be done.
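To make "require the OpenAI API" concrete, a minimal sketch of the request shape a judge call would need to send — the helper name, model name, and prompt wording are illustrative, not part of any existing implementation:

```python
import json

def build_judge_request(model: str, question: str, answer: str) -> dict:
    """Build an OpenAI chat.completions payload asking a judge model to grade an answer."""
    prompt = (
        "Rate the following answer to the question on a scale of 1-5.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic judging
    }

payload = build_judge_request("my-local-judge", "What is 2+2?", "4")
print(json.dumps(payload, indent=2))
```

Any local server exposing `chat.completions` (e.g. an OpenAI-compatible inference server) could accept such a payload, which is what makes a plain endpoint selection sufficient.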
Motivation
Local development and evals
One way this is already supported in the current version:
1. Have an endpoint running that supports the OpenAI API format, specifically `chat.completions`.
2. Start LLM Studio with an environment variable pointing to that endpoint: `OPENAI_API_BASE="http://111.111.111.111:8000/v1"`
3. Validate the setting on the "Settings" page. Note that "Use Azure" must be off, and the environment variable set above should be visible below it. Changing the value here has no effect! The field only serves to verify that the environment variable was set correctly.
4. Run an experiment with the GPT metric, using the model name served by your endpoint.

Calls to the LLM judge are now directed to your own LLM endpoint.
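Why setting `OPENAI_API_BASE` redirects the judge calls can be sketched as follows — a minimal illustration (the helper name is hypothetical) of how an OpenAI-compatible client typically resolves its request URL, with the environment variable replacing the default `https://api.openai.com/v1` prefix:

```python
import os

def resolve_chat_completions_url(default_base: str = "https://api.openai.com/v1") -> str:
    """Resolve the chat.completions URL, honoring OPENAI_API_BASE if set."""
    base = os.environ.get("OPENAI_API_BASE", default_base).rstrip("/")
    return f"{base}/chat/completions"

# With the variable from the steps above set, judge calls go to your endpoint:
os.environ["OPENAI_API_BASE"] = "http://111.111.111.111:8000/v1"
print(resolve_chat_completions_url())
# http://111.111.111.111:8000/v1/chat/completions
```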