As we are using OpenAI, the responses are sometimes long and the caller has to wait for the whole response to finish. Could we use OpenAI's streaming feature? If so, how? I tried to use the streaming feature but couldn't get it working correctly.
Definitely possible to at least stream the openai response into a TTS response (I do it in my project here: https://github.com/sshh12/llm-chat-web-ui/tree/main/modal). However, not sure how well the twilio API would support something like that (although I haven't looked).
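For reference, here is a minimal sketch of the general idea, assuming the current openai Python SDK (v1+). The model name and the `synthesize_speech` helper are placeholders for illustration, not part of this repo or the linked project: tokens are consumed as they arrive and flushed to TTS at sentence boundaries so playback can start before the full completion is done.

```python
# Sketch: stream tokens from the Chat Completions API and hand complete
# sentences to a hypothetical TTS step as they arrive, instead of waiting
# for the full response.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def synthesize_speech(text: str) -> None:
    # Placeholder for whatever TTS call the project actually uses.
    print(f"[TTS] {text}")


def stream_reply(prompt: str) -> None:
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever the project configures
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )

    buffer = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if not delta:
            continue
        buffer += delta
        # Flush on sentence boundaries so TTS can start speaking early.
        if any(p in buffer for p in ".!?"):
            idx = max(buffer.rfind(p) for p in ".!?")
            synthesize_speech(buffer[: idx + 1].strip())
            buffer = buffer[idx + 1 :]

    if buffer.strip():
        synthesize_speech(buffer.strip())


if __name__ == "__main__":
    stream_reply("Explain streaming responses in one short paragraph.")
```

Whether this helps end to end still depends on how the audio is delivered back to the caller; if Twilio can't accept audio incrementally on that leg, the streaming only shortens the LLM-to-TTS portion of the wait.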