chore: refactor the baichuan model (langgenius#7953)
hjlarry authored and ProseGuys committed Sep 5, 2024
1 parent 83e8486 commit be27106
Showing 9 changed files with 337 additions and 320 deletions.
8 changes: 0 additions & 8 deletions api/core/model_runtime/model_providers/baichuan/baichuan.yaml
Original file line number Diff line number Diff line change
@@ -27,11 +27,3 @@ provider_credential_schema:
placeholder:
zh_Hans: 在此输入您的 API Key
en_US: Enter your API Key
- variable: secret_key
label:
en_US: Secret Key
type: secret-input
required: false
placeholder:
zh_Hans: 在此输入您的 Secret Key
en_US: Enter your Secret Key
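The deleted lines above drop the optional `secret_key` credential from Baichuan's provider schema, leaving only the API key. A minimal sketch of what the surviving `api_key` entry plausibly looks like after this commit — the `variable`, `type`, and `required` values are assumptions mirrored from the removed sibling field, since only the placeholder text appears in the diff:

```yaml
provider_credential_schema:
  credential_form_schemas:
    - variable: api_key        # assumed name; only its placeholder survives in the diff
      label:
        en_US: API Key
      type: secret-input       # assumed, mirroring the removed secret_key field
      required: true           # assumed
      placeholder:
        zh_Hans: 在此输入您的 API Key
        en_US: Enter your API Key
```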
@@ -43,3 +43,4 @@ parameter_rules:
zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
en_US: Allow the model to perform external search to enhance the generation results.
required: false
deprecated: true
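Reading the added `deprecated: true` together with the help text above it, the full `with_search_enhance` rule after this hunk plausibly reads as follows; the `label` and `type` keys are assumptions pieced together from the other model files in this commit, which show a 搜索增强 label but not its English counterpart:

```yaml
- name: with_search_enhance
  label:
    zh_Hans: 搜索增强
    en_US: Search Enhance      # assumed English label; not visible in the diff
  type: boolean                # assumed
  help:
    zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
    en_US: Allow the model to perform external search to enhance the generation results.
  required: false
  deprecated: true
```

Marking the rule deprecated rather than deleting it presumably keeps existing app configurations loading while discouraging new use of the toggle.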
@@ -43,3 +43,4 @@ parameter_rules:
zh_Hans: 允许模型自行进行外部搜索,以增强生成结果。
en_US: Allow the model to perform external search to enhance the generation results.
required: false
deprecated: true
@@ -4,36 +4,32 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 192000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: with_search_enhance
label:
zh_Hans: 搜索增强
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 128000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 32000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强
30 changes: 19 additions & 11 deletions api/core/model_runtime/model_providers/baichuan/llm/baichuan4.yaml
@@ -4,36 +4,44 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.3
- name: top_p
use_template: top_p
default: 0.85
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
min: 0
max: 20
default: 5
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens
use_template: max_tokens
required: true
default: 8000
min: 1
max: 32000
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
default: 1
min: 1
max: 2
default: 2048
- name: res_format
label:
zh_Hans: 回复格式
en_US: response format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- name: with_search_enhance
label:
zh_Hans: 搜索增强