Podcastfy not working with local model #215
Hello, when I try to use Podcastfy with a local model (TinyLlama, for example), it doesn't generate the right formatting, and sometimes it doesn't generate anything at all.
I don't have this issue with an OpenAI model (GPT-4o), so could it be related to the context window? I know TinyLlama only has a 2048-token context window.
Or is it something else?
Thank you in advance for your help and for your tool.

Comments
Hi, thanks for your question. Unfortunately, there are no guarantees when it comes to open and, in particular, small models.

However, I do acknowledge that integration with outlines to guarantee structured output would greatly increase reliability, as demonstrated here:
https://www.souzatharsis.com/tamingLLMs/notebooks/structured_output.html#outlines

1. Potential instability: local models may produce less consistent or stable outputs than well-tested private models, often producing transcripts that cannot be used for podcast generation (TTS) out of the box.
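For concreteness, here is a minimal sketch of the kind of outlines-based constraint that would help, using the pre-1.0 outlines API (`outlines.models.transformers` / `outlines.generate.json`). The transcript schema, model name, and prompt are illustrative, not Podcastfy's actual format:

```python
# Minimal sketch (not Podcastfy's implementation): force a small local
# model to emit JSON matching a transcript schema via outlines.
from pydantic import BaseModel
import outlines

class Turn(BaseModel):
    speaker: str  # e.g. "Host" or "Guest"
    text: str

class Transcript(BaseModel):
    turns: list[Turn]

# Load the local model through the Hugging Face transformers backend.
model = outlines.models.transformers("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Constrained generator: output is guaranteed to parse as Transcript.
generator = outlines.generate.json(model, Transcript)

transcript = generator("Write a short two-host podcast dialogue about black holes.")
print(transcript.turns[0].speaker, ":", transcript.turns[0].text)
```

With this pattern, even a 1.1B model cannot produce unparseable output; whether the dialogue itself is any good is a separate question.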
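On the context-window question: TinyLlama's 2048-token limit is a plausible contributor, since the generation prompt includes the full source content. A quick way to check is to count tokens with the model's own tokenizer. This is a diagnostic sketch, assuming the standard Hugging Face tokenizer and a hypothetical input file, not anything Podcastfy does internally:

```python
# Rough check of whether an input fits TinyLlama's 2048-token window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

text = open("my_source_text.txt").read()  # illustrative: the content being podcastified
n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens (limit: 2048)")
if n_tokens > 2048:
    print("Input exceeds the context window; silent truncation or empty output is likely.")
```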