Can the Assistant Accelerator be used with an Azure OpenAI fine-tuned model? #644
Unanswered · PaulineJaquot asked this question in Q&A
Replies: 2 comments
-
Hi Pauline, it is possible; a few things to keep in mind:
I would also point out that, in most cases, fine-tuning can be avoided in favor of more thorough prompt engineering (at least in my experience). The base prompts within Infor. Asst. are very thorough, but are intended to be 'generic' for any type of document. Tuning the prompts for your dataset could make a bigger difference in the quality of returned responses. Fine-tuning is obviously not a 'one and done' type of effort.
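To make the two points above concrete, here is a minimal sketch (the deployment name, prompt text, and helper function are all illustrative, not part of the accelerator's actual code): with Azure OpenAI, a fine-tuned model is addressed by its deployment name, so swapping one in is essentially a parameter change, while the system prompt is often where the larger quality gains come from.

```python
# Sketch only: deployment name and prompt below are made-up placeholders.

def build_chat_request(deployment_name: str, system_prompt: str, user_question: str) -> dict:
    """Assemble kwargs for an Azure OpenAI chat.completions.create call.

    On Azure OpenAI the `model` parameter is the *deployment* name, so a
    fine-tuned model is invoked exactly like a base model once deployed.
    """
    return {
        "model": deployment_name,  # e.g. a fine-tuned deployment instead of a base one
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ],
        "temperature": 0.0,  # deterministic answers suit extraction-style tasks
    }

# A domain-specific system prompt (rather than a generic one) is the kind of
# prompt tuning that can narrow the quality gap before fine-tuning is tried.
domain_prompt = (
    "You are an assistant for maintenance manuals. Answer only from the "
    "provided context and cite the section number for every claim."
)
request = build_chat_request(
    "my-finetuned-deployment", domain_prompt, "What torque spec applies?"
)
```

The point of the sketch: the only code-level difference between the base and fine-tuned model is the deployment name passed as `model`; everything else (including the prompt work) stays the same.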
-
Great, thank you! We'll keep this option as a last resort :)
-
Hi,
We noticed that the model is struggling to extract domain-specific information and to answer specific questions about our private documents. We're considering training a model (in Azure OpenAI Studio) on one of these specific tasks to check the potential. As long as the trained model is deployed and the specific environment variables are adjusted (deployment name, resource group, etc.), is it possible to use it in the Assistant Accelerator? Thank you; just want to double-check before we go into training :)
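For what it's worth, pointing the accelerator at a new deployment typically comes down to environment settings along these lines. The variable names below are illustrative placeholders, not the accelerator's actual configuration keys; check its own deployment docs for the exact names.

```shell
# Illustrative only: treat these names as placeholders, not the real keys.
export AZURE_OPENAI_SERVICE_NAME="my-openai-resource"         # the Azure OpenAI resource
export AZURE_OPENAI_DEPLOYMENT_NAME="my-finetuned-deployment" # deployment of the fine-tuned model
export AZURE_OPENAI_RESOURCE_GROUP="my-resource-group"        # resource group hosting it
```

Once the fine-tuned model is deployed, only values like these should need to change; the application code addresses the model by deployment name either way.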