feat: startup info; update Dockerfile&&requirements; pretty log&&error (#28)

* Switched back to the smaller Docker image
* Filter out unimportant log output
* Add startup info
* Update README
KenyonY authored May 13, 2023
1 parent 62d184e commit 6d5c184
Showing 13 changed files with 124 additions and 59 deletions.
4 changes: 2 additions & 2 deletions .env
@@ -1,7 +1,7 @@
LOG_CHAT=true
LOG_CHAT=True
OPENAI_BASE_URL=https://api.openai.com
OPENAI_API_KEY=
PASSWORD=
FORWARD_KEY=
ROUTE_PREFIX=
IP_WHITELIST=
IP_BLACKLIST=
2 changes: 1 addition & 1 deletion .github/workflows/docker-publish.yml
@@ -45,7 +45,7 @@ jobs:
with:
context: .
file: ./docker/Dockerfile
platforms: linux/amd64
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
1 change: 1 addition & 0 deletions .gitignore
@@ -1,5 +1,6 @@
.idea/
.vscode/
.DS_Store
third-party/
run.sh
ssl/
29 changes: 18 additions & 11 deletions README.md
@@ -40,30 +40,37 @@
This project solves the problem that some regions cannot directly access OpenAI: deploy the service on a server that can normally reach the openai api, and forward OpenAI requests through it, i.e. set up a reverse-proxy service.

---

Test access: https://caloi.top/openai/v1/chat/completions
In other words, https://caloi.top/openai is equivalent to https://api.openai.com

---

# Table of Contents

- [Characteristics](#特点)
- [Features](#功能)
- [Usage](#应用)
- [Installation & Deployment](#安装部署)
- [Calling the Service](#服务调用)
- [Configuration Options](#配置选项)
- [Chat Logs](#聊天日志)
- [Advanced Configuration](#高级配置)

# Characteristics

# Features
**Basic Features**
- [x] Support forwarding all OpenAI APIs
- [x] Support streaming responses
- [x] Support specifying a forwarding route prefix
- [x] Docker deployment
- [x] pip installation deployment

**Advanced Features**
- [x] Real-time chat logging (including the content of streaming responses)
- [x] Support a default openai api key (round-robin over multiple api keys)
- [x] Custom forward api key in place of the openai api key (see Advanced Configuration)
- [x] Docker deployment
- [x] Support specifying a forwarding route prefix
- [x] Request IP verification
- [x] Request IP verification (IP whitelist and blacklist)
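The multi-key round-robin mentioned in the feature list can be sketched with `itertools.cycle`, which `base.py` in this commit imports; the keys below are placeholders:

```python
from itertools import cycle

# Placeholder keys standing in for a comma-separated OPENAI_API_KEY value
keys = "sk-aaa,sk-bbb,sk-ccc".split(",")
key_cycle = cycle(keys)  # endless round-robin iterator over the keys

# Each forwarded request takes the next key in turn
picked = [next(key_cycle) for _ in range(5)]
print(picked)  # → ['sk-aaa', 'sk-bbb', 'sk-ccc', 'sk-aaa', 'sk-bbb']
```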

Test access: https://caloi.top/openai/v1/chat/completions is equivalent to https://api.openai.com/v1/chat/completions
In other words, https://caloi.top/openai is equivalent to https://api.openai.com
# Usage

> Here, the proxy service https://caloi.top/openai, set up with this project for personal use, is taken as an example
@@ -123,7 +130,7 @@ curl --location 'https://caloi.top/openai/v1/images/generations' \
Choose either one.

## pip
The pip installation currently still has a bug when used behind an nginx reverse proxy; Docker deployment is recommended.

**Install**

```bash
@@ -201,7 +208,7 @@ http://{ip}:{port}/v1/chat/completions
| --workers | number of worker processes | 1 |

**Environment variable options**
See the `.env` file in the project root
Values are read from a `.env` file in the working directory:

| Environment variable | Description | Default |
|-----------------|-----------------------------------------------------------------|:------------------------:|
@@ -245,7 +252,7 @@ FORWARD_KEY=fk-****** # the fk-token here is defined by us
```bash
curl https://caloi.top/openai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer fk-******" \
-H "Authorization: Bearer fk-mytoken-abcd" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
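The curl call above sends the forward key as an ordinary Bearer token; a minimal Python sketch of building the same request (key and endpoint are placeholders, and the request is only constructed, not sent):

```python
import json

FORWARD_KEY = "fk-mytoken-abcd"  # placeholder fk- token defined by the proxy operator
url = "https://caloi.top/openai/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    # The fk- token goes where the real sk- key would normally go
    "Authorization": f"Bearer {FORWARD_KEY}",
}
body = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
})
```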
30 changes: 19 additions & 11 deletions README_EN.md
@@ -52,6 +52,12 @@
</p>
This project is designed to solve the problem of some regions being unable to directly access OpenAI. The service is deployed on a server that can access the OpenAI API, and OpenAI requests are forwarded through the service, i.e. a reverse proxy service is set up.

---

Test access: https://caloi.top/openai/v1/chat/completions
To put it another way, https://caloi.top/openai is equivalent to https://api.openai.com.

---

# Table of Contents

@@ -65,17 +71,19 @@ This project is designed to solve the problem of some regions being unable to di

# Features

- [x] Supports forwarding of all OpenAI interfaces
- [x] Streaming Response
- [x] Real-time recording of chat records (including the chat contents of streaming responses).
- [x] Supports default API key (cyclic call with multiple API keys)
- [x] Use custom forward API key instead of OpenAI API key (see advanced configuration).
- [x] Docker deployment
- [x] Support for specifying the forwarding routing prefix
- [x] Request IP verification

Test access: https://caloi.top/openai/v1/chat/completions is equivalent to https://api.openai.com/v1/chat/completions
Or, to put it another way, https://caloi.top/openai is equivalent to https://api.openai.com.
**Basic Features**
- [x] Support forwarding all OpenAI APIs.
- [x] Support streaming responses.
- [x] Support specifying forwarding route prefixes.
- [x] Docker deployment.
- [x] Pip installation deployment.

**Advanced Features**
- [x] Real-time recording of chat logs (including chat content from streaming responses).
- [x] Support default OpenAI API key (round-robin invocation of multiple API keys).
- [x] Custom forward API key instead of OpenAI API key (see advanced configuration).
- [x] Support request IP verification (IP whitelist and blacklist).


# Usage

2 changes: 1 addition & 1 deletion docker/Dockerfile
@@ -1,4 +1,4 @@
FROM continuumio/miniconda3:master-alpine
FROM python:3.10-alpine
LABEL maintainer="kunyuan"
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
2 changes: 1 addition & 1 deletion openai_forward/__init__.py
@@ -1,4 +1,4 @@
__version__ = "0.1.8"
__version__ = "0.1.9"

from dotenv import load_dotenv

22 changes: 13 additions & 9 deletions openai_forward/__main__.py
@@ -8,6 +8,7 @@ class Cli:
def run(port=8000,
workers=1,
api_key=None,
forward_key=None,
base_url=None,
log_chat=None,
route_prefix=None,
@@ -20,20 +21,23 @@ def run(port=8000,
----------
port: int, default 8000
workers: int, default 1
api_key: str, default None
base_url: str, default None
log_chat: str, default None
route_prefix: str, default None
ip_whitelist: str, default None
ip_blacklist: str, default None
workers: int, 1
api_key: str, None
forward_key: str, None
base_url: str, None
log_chat: str, None
route_prefix: str, None
ip_whitelist: str, None
ip_blacklist: str, None
"""
if base_url:
os.environ['OPENAI_BASE_URL'] = base_url
if log_chat:
os.environ['LOG_CHAT'] = log_chat
if api_key:
os.environ['OPENAI_API_KEY'] = api_key
if forward_key:
os.environ['FORWARD_KEY'] = forward_key
if log_chat:
os.environ['LOG_CHAT'] = log_chat
if route_prefix:
os.environ['ROUTE_PREFIX'] = route_prefix
if ip_whitelist:
7 changes: 6 additions & 1 deletion openai_forward/app.py
@@ -2,15 +2,20 @@
from .routers.openai_v1 import router as router_v1
from sparrow.api import create_app
import httpx
import chardet

app = create_app(title="openai_forward", version="1.0")
openai = Openai()
use_http2 = False


def autodetect(content):
return chardet.detect(content).get("encoding")


@app.on_event('startup')
async def startup_event():
app.state.client = httpx.AsyncClient(base_url=Openai.BASE_URL, http2=use_http2)
app.state.client = httpx.AsyncClient(base_url=Openai.BASE_URL, http2=use_http2, default_encoding=autodetect)


@app.on_event('shutdown')
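The `default_encoding=autodetect` change above lets httpx defer response decoding to chardet instead of assuming UTF-8. A standalone sketch of the same helper, with a hedged fallback for when chardet is not installed:

```python
def autodetect(content: bytes):
    """Guess the encoding of raw response bytes (mirrors the diff above)."""
    try:
        import chardet  # third-party dependency added by this commit
        return chardet.detect(content).get("encoding")
    except ImportError:
        # Fallback assumption when chardet is unavailable
        return "utf-8"

print(autodetect(b"hello world"))  # 'ascii' with chardet, 'utf-8' otherwise
```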
33 changes: 14 additions & 19 deletions openai_forward/base.py
@@ -5,8 +5,8 @@
from starlette.background import BackgroundTask
import os
from itertools import cycle
from .content.chat import parse_chat_completions, ChatSaver
from .config import env2list
from .content.chat import ChatSaver
from .config import env2list, print_startup_info


class OpenaiBase:
@@ -27,6 +27,7 @@ class OpenaiBase:
ROUTE_PREFIX = '/' + ROUTE_PREFIX
timeout = 30
chatsaver = ChatSaver(save_interval=10)
print_startup_info(BASE_URL, ROUTE_PREFIX, _openai_api_key_list, _FWD_KEYS, _LOG_CHAT)

def validate_request_host(self, ip):
if self.IP_WHITELIST and ip not in self.IP_WHITELIST:
@@ -37,18 +38,14 @@ def validate_request_host(self, ip):
detail=f"Forbidden, ip={ip} in blacklist!")

@classmethod
def log_chat_completions(cls, bytes_: bytes):
target_info = parse_chat_completions(bytes_)
cls.chatsaver.add_chat({target_info['role']: target_info['content']})

@classmethod
async def aiter_bytes(cls, r: httpx.Response):
async def aiter_bytes(cls, r: httpx.Response, route_path: str):
bytes_ = b''
async for chunk in r.aiter_bytes():
bytes_ += chunk
yield chunk
try:
cls.log_chat_completions(bytes_)
target_info = cls.chatsaver.parse_bytes_to_content(bytes_, route_path)
cls.chatsaver.add_chat({target_info['role']: target_info['content']})
except Exception as e:
logger.debug(f"log chat (not) error:\n{e=}")
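The `aiter_bytes` hunk above follows a tee-like pattern: every streamed chunk is forwarded to the client unchanged while the whole body is accumulated for logging once the stream ends. A minimal synchronous sketch of the idea (names are illustrative):

```python
def tee_stream(chunks, on_complete):
    # Forward each chunk unchanged while accumulating the full body
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        yield chunk
    # Stream exhausted: hand the complete body to the logging callback
    on_complete(buffer)

collected = []
forwarded = list(tee_stream([b"data: hel", b"lo"], collected.append))
print(forwarded)   # → [b'data: hel', b'lo']
print(collected)   # → [b'data: hello']
```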

@@ -78,17 +75,15 @@ async def _reverse_proxy(cls, request: Request):
else:
tmp_headers = {}

if cls._LOG_CHAT:
log_chat_completions = False
if cls._LOG_CHAT and request.method == 'POST':
try:
input_info = await request.json()
msgs = input_info['messages']
cls.chatsaver.add_chat({
"host": request.client.host,
"model": input_info['model'],
"messages": [{msg['role']: msg['content']} for msg in msgs],
})
chat_info = await cls.chatsaver.parse_payload_to_content(request, route_path=url_path)
if chat_info:
cls.chatsaver.add_chat(chat_info)
log_chat_completions = True
except Exception as e:
logger.debug(f"log chat (not) error:\n{request.client.host=}: {e}")
logger.debug(f"log chat error:\n{request.client.host=} {request.method=}: {e}")

tmp_headers.update({"Content-Type": "application/json"})
req = client.build_request(
@@ -108,7 +103,7 @@ async def _reverse_proxy(cls, request: Request):
logger.error(error_info)
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=error_info)

aiter_bytes = cls.aiter_bytes(r) if cls._LOG_CHAT else r.aiter_bytes()
aiter_bytes = cls.aiter_bytes(r, url_path) if log_chat_completions else r.aiter_bytes()
return StreamingResponse(
aiter_bytes,
status_code=r.status_code,
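`env2list`, imported from `.config` in the hunk above, turns variables like `IP_WHITELIST` into Python lists. Its actual implementation is not shown in this diff; a rough stand-in, under the assumption that it splits a comma-separated value:

```python
import os

def env2list(env_name: str, sep: str = ",") -> list:
    # Hypothetical stand-in: split an environment variable into a cleaned list
    raw = os.environ.get(env_name, "")
    return [item.strip() for item in raw.split(sep) if item.strip()]

os.environ["IP_WHITELIST"] = "8.8.8.8, 1.1.1.1"
print(env2list("IP_WHITELIST"))  # → ['8.8.8.8', '1.1.1.1']
```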
24 changes: 23 additions & 1 deletion openai_forward/config.py
@@ -6,6 +6,29 @@
import logging
import os
import time
from rich import print
from rich.panel import Panel
from rich.table import Table


def print_startup_info(base_url, route_prefix, api_key, forward_key, log_chat):
try:
from dotenv import load_dotenv
load_dotenv('.env')
except Exception:
...
route_prefix = route_prefix or "/"
api_key_info = True if len(api_key) else False
forward_key_info = True if len(forward_key) else False
table = Table(title="", box=None, width=100)
table.add_column("base-url", justify="left", style="#df412f")
table.add_column("route-prefix", justify="center", style="#df412f")
table.add_column("openai-api-key", justify="center", style="green")
table.add_column("forward-key", justify="center", style="green")
table.add_column("Log-chat", justify="center", style="green")
table.add_column("Log-dir", justify="center", style="#f5bb00")
table.add_row(base_url, route_prefix, str(api_key_info), str(forward_key_info), str(log_chat), "./Log/*.log")
print(Panel(table, title="🤗openai-forward is ready to serve!", expand=False))


class InterceptHandler(logging.Handler):
@@ -29,7 +52,6 @@ def setting_log(log_name, multi_process=True):
if os.environ.get("TZ") == "Asia/Shanghai":
os.environ['TZ'] = "UTC-8"
if hasattr(time, 'tzset'):
print(os.environ['TZ'])
time.tzset()

logging.root.handlers = [InterceptHandler()]
22 changes: 22 additions & 0 deletions openai_forward/content/chat.py
@@ -2,6 +2,7 @@
from orjson import JSONDecodeError
from loguru import logger
from httpx._decoders import LineDecoder
from fastapi import Request
from pathlib import Path
from sparrow import relp
from typing import List, Dict
@@ -66,6 +67,27 @@ def _init_chat_file(self):
while Path(self.chat_file).exists():
self._file_idx += 1

@staticmethod
async def parse_payload_to_content(request: Request, route_path: str):
payload = await request.json()
if route_path == "/v1/chat/completions":
msgs = payload['messages']
model = payload['model']
return {
"host": request.client.host,
"model": model,
"messages": [{msg['role']: msg['content']} for msg in msgs],
}
else:
return {}

@staticmethod
def parse_bytes_to_content(bytes_: bytes, route_path: str):
if route_path == "/v1/chat/completions":
return parse_chat_completions(bytes_)
else:
return {}

def add_chat(self, chat_info: dict):
logger.info(str(chat_info))
self._chat_list.append(chat_info)
5 changes: 3 additions & 2 deletions pyproject.toml
@@ -19,13 +19,14 @@ classifiers = [

dependencies = [
"loguru",
"sparrow-python>=0.1.2",
"sparrow-python>=0.1.3",
"fastapi",
"uvicorn",
"orjson",
"python-dotenv",
"httpx",
"pytz"
"pytz",
"chardet",
]

dynamic = ["version"]
