
Documentation, streaming example using openai has type error #10003

Open · 1 task done
eaubin opened this issue Nov 19, 2024 · 1 comment · May be fixed by #10025
Assignees: abidlabs
Labels: bug (Something isn't working), docs/website (Related to documentation or website)

Comments

eaubin commented Nov 19, 2024

Describe the bug

Running the code from the docs page "A streaming example using openai", typing anything, and pressing send gives a runtime error caused by:

.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1059, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[0]': expected an object, but got a string instead.", 'type': 'invalid_request_error', 'param': 'messages[0]', 'code': 'invalid_type'}}
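
The 400 points at messages[0]: on the first send, history is empty, so the only element in the messages list is the raw string the user typed rather than a message object. Roughly, the request presumably ends up carrying the first shape below, while the chat completions API expects the second (hypothetical values, for illustration only):

# What the buggy example presumably sends on the first message:
messages = ["hello"]
# What the chat completions API expects:
messages = [{"role": "user", "content": "hello"}]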

Have you searched existing issues? 🔎

  • I have searched and found no existing issues

Reproduction

import os
from openai import OpenAI
import gradio as gr

api_key = os.environ['OPENAI_API_KEY']
client = OpenAI(api_key=api_key)

def predict(message, history):
    history_openai_format = []
    for msg in history:
        history_openai_format.append(msg)
    # `message` is a plain string here, so this appends a string where
    # the API expects a {"role": ..., "content": ...} dict
    history_openai_format.append(message)

    response = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=history_openai_format,
        temperature=1.0,
        stream=True,
    )

    partial_message = ""
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            partial_message = partial_message + chunk.choices[0].delta.content
            yield partial_message

gr.ChatInterface(predict, type="messages").launch()
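
One likely fix is to wrap the incoming string as a user message before calling the API; with type="messages", history already contains OpenAI-style dicts and can be passed through unchanged. A minimal sketch of a corrected predict, reusing the imports and client from the script above (not necessarily the fix that lands in #10025):

def predict(message, history):
    # `history` already holds {"role": ..., "content": ...} dicts,
    # so only the new user turn needs wrapping
    messages = list(history) + [{"role": "user", "content": message}]

    response = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=messages,
        temperature=1.0,
        stream=True,
    )

    # Stream partial completions back to the ChatInterface
    partial_message = ""
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            partial_message = partial_message + chunk.choices[0].delta.content
            yield partial_message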

Screenshot

No response

Logs

✦ ❯ uv run ./demo.py       
* Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File ".venv/lib/python3.13/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<5 lines>...
    )
    ^
  File ".venv/lib/python3.13/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<11 lines>...
    )
    ^
  File ".venv/lib/python3.13/site-packages/gradio/blocks.py", line 2015, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<8 lines>...
    )
    ^
  File ".venv/lib/python3.13/site-packages/gradio/blocks.py", line 1574, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/gradio/utils.py", line 710, in async_iteration
    return await anext(iterator)
           ^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/gradio/utils.py", line 815, in asyncgen_wrapper
    response = await iterator.__anext__()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/gradio/chat_interface.py", line 678, in _stream_fn
    first_response = await async_iteration(generator)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/gradio/utils.py", line 710, in async_iteration
    return await anext(iterator)
           ^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/gradio/utils.py", line 704, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        run_sync_iterator_async, self.iterator, limiter=self.limiter
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File ".venv/lib/python3.13/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        func, args, abandon_on_cancel=abandon_on_cancel, limiter=limiter
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File ".venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File ".venv/lib/python3.13/site-packages/gradio/utils.py", line 687, in run_sync_iterator_async
    return next(iterator)
  File "./demo.py", line 16, in predict
    response = client.chat.completions.create(model='gpt-3.5-turbo',
    messages= history_openai_format,
    temperature=1.0,
    stream=True)
  File ".venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File ".venv/lib/python3.13/site-packages/openai/resources/chat/completions.py", line 829, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<39 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File ".venv/lib/python3.13/site-packages/openai/_base_client.py", line 1278, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.13/site-packages/openai/_base_client.py", line 955, in request
    return self._request(
           ~~~~~~~~~~~~~^
        cast_to=cast_to,
        ^^^^^^^^^^^^^^^^
    ...<3 lines>...
        retries_taken=retries_taken,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File ".venv/lib/python3.13/site-packages/openai/_base_client.py", line 1059, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[0]': expected an object, but got a string instead.", 'type': 'invalid_request_error', 'param': 'messages[0]', 'code': 'invalid_type'}}

System Info

Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.6.0
gradio_client version: 1.4.3

------------------------------------------------
gradio dependencies in your environment:

aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts: 0.2.1
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.4
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.13.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.


gradio_client dependencies in your environment:

fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0

Severity

I can work around it

eaubin added the bug (Something isn't working) label on Nov 19, 2024
abidlabs added the docs/website (Related to documentation or website) label on Nov 19, 2024
abidlabs self-assigned this on Nov 19, 2024
abidlabs (Member) commented

Thanks for flagging, we'll fix that
