This project demonstrates how to integrate FastAPI with OpenAI's Assistants API, using Server-Sent Events (SSE) to stream responses in real time. The application creates interactive sessions with an OpenAI Assistant, handles real-time data streaming, and shows how asynchronous communication can improve user interaction.
- Assistant Setup: Configure the OpenAI Assistants API.
- Thread Management: Create and manage conversation threads.
- Real-time Streaming: Use SSE to stream responses from the OpenAI Assistant.
- Functionality Extension: Placeholder for future function-calling integration.
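To illustrate the thread-management idea in isolation, here is a hypothetical in-memory stand-in (the `ThreadStore` class and its `thread_...` ID format are inventions for this sketch; the real application delegates thread creation and message history to OpenAI's thread objects):

```python
import uuid


class ThreadStore:
    """Toy in-memory model of the thread bookkeeping the Assistants API performs."""

    def __init__(self) -> None:
        self._threads: dict[str, list[dict]] = {}

    def create_thread(self) -> str:
        # Mimic OpenAI-style thread IDs for readability.
        thread_id = f"thread_{uuid.uuid4().hex[:12]}"
        self._threads[thread_id] = []
        return thread_id

    def add_message(self, thread_id: str, role: str, text: str) -> None:
        self._threads[thread_id].append({"role": role, "text": text})

    def history(self, thread_id: str) -> list[dict]:
        return list(self._threads[thread_id])
```

The point of the abstraction is that each conversation lives under a stable ID, so the chat endpoint only needs a `thread_id` to resume context.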
- Python 3.10+
- FastAPI
- Uvicorn (for running the application)
- OpenAI Python client
Clone the repository:
git clone https://github.com/xbreid/fastapi-assistant-streaming.git
cd fastapi-assistant-streaming
Install the required packages:
pip install -r requirements.txt
Set up environment variables: Create a .env file in the project root directory and add your OpenAI API key and Assistant ID:
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_ASSISTANT_ID=your_assistant_id_here
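A defensive way to consume these settings is to validate them at startup so a missing key fails fast rather than at the first API call. The helper below is a sketch (the name `load_settings` is not part of the project), and it assumes the `.env` file has already been loaded into the process environment, e.g. by python-dotenv:

```python
import os


def load_settings() -> dict:
    """Read the required settings from the environment, failing early if absent."""
    settings = {}
    for key in ("OPENAI_API_KEY", "OPENAI_ASSISTANT_ID"):
        value = os.environ.get(key)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {key}")
        settings[key] = value
    return settings
```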
Start the FastAPI Development Server:
fastapi dev main.py
Check the Health:
curl -X 'GET' --url 'http://localhost:8000/'
Get the Assistant:
curl -X 'GET' --url 'http://localhost:8000/api/v1/assistant'
Create a thread:
curl -X POST http://localhost:8000/api/v1/assistant/threads -H "Content-Type: application/json"
Send a message:
curl -N -X POST \
-H "Accept: text/event-stream" -H "Content-Type: application/json" \
-d '{"text": "Hello! Please introduce yourself", "thread_id": "thread_abc123" }' \
http://localhost:8000/api/v1/assistant/chat
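On the client side, the `-N` (no-buffer) stream returned by the chat endpoint is a sequence of `data:` blocks separated by blank lines. As a sketch of how a client might split such a body into payloads (the `parse_sse` function is illustrative, not project code, and operates on the full response text rather than an incremental socket):

```python
def parse_sse(stream_text: str) -> list[str]:
    """Split a raw text/event-stream body into its 'data:' payloads."""
    events = []
    for block in stream_text.split("\n\n"):
        # An event may carry several data lines; join them per the SSE spec.
        data_lines = [
            line[5:].lstrip()
            for line in block.split("\n")
            if line.startswith("data:")
        ]
        if data_lines:
            events.append("\n".join(data_lines))
    return events
```

A production client would instead read the response incrementally (e.g. with an SSE client library) so tokens render as they arrive.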
Contributions are always welcome!
Distributed under the MIT License. See LICENSE for more information.