
chore: refactor the beichuan model #7953

Merged (8 commits) on Sep 4, 2024
8 changes: 0 additions & 8 deletions api/core/model_runtime/model_providers/baichuan/baichuan.yaml
@@ -27,11 +27,3 @@ provider_credential_schema:
       placeholder:
         zh_Hans: 在此输入您的 API Key
         en_US: Enter your API Key
-    - variable: secret_key
-      label:
-        en_US: Secret Key
-      type: secret-input
-      required: false
-      placeholder:
-        zh_Hans: 在此输入您的 Secret Key
-        en_US: Enter your Secret Key
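With the `secret_key` field deleted, only the API key prompt remains in the provider form. A minimal sketch of the surviving credential schema — the `api_key` variable name, label, `type`, and `required` lines are assumptions based on Dify's usual `provider_credential_schema` layout, since the hunk only shows the placeholder lines:

```yaml
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key          # assumed field name; not shown in the hunk
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
```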
@@ -43,3 +43,4 @@ parameter_rules:
       zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
       en_US: Allow the model to perform external search to enhance the generation results.
     required: false
+    deprecated: true
@@ -43,3 +43,4 @@ parameter_rules:
       zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
       en_US: Allow the model to perform external search to enhance the generation results.
     required: false
+    deprecated: true
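The hunk above starts mid-rule, so only the tail of the parameter definition is visible. A sketch of how the full `with_search_enhance` rule presumably reads after this change — the `name`, `label`, `type`, and `default` lines are assumptions patterned on the matching rule elsewhere in these files, not taken from the hunk:

```yaml
- name: with_search_enhance      # assumed; the hunk begins partway through the rule
  label:
    zh_Hans: 搜索增强
    en_US: Search Enhance
  type: boolean
  default: false
  help:
    zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
    en_US: Allow the model to perform external search to enhance the generation results.
  required: false
  deprecated: true               # the line added by this PR
```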
@@ -4,36 +4,32 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 192000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: with_search_enhance
label:
zh_Hans: 搜索增强
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 128000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 32000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强
30 changes: 19 additions & 11 deletions api/core/model_runtime/model_providers/baichuan/llm/baichuan4.yaml
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 32000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强
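The extraction dropped the +/- markers from the larger hunks, so old and new lines appear interleaved (note the two `default:` values under `frequency_penalty`). One block that is clearly new across the baichuan3-turbo, baichuan3-turbo-128k, and baichuan4 files is the `res_format` rule; reassembled from the hunk above, it presumably reads:

```yaml
- name: res_format
  label:
    zh_Hans: 回复格式
    en_US: response format
  type: string
  help:
    zh_Hans: 指定模型必须输出的格式
    en_US: specifying the format that the model must output
  required: false
  options:
    - text
    - json_object
```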