Add python script to fetch togetherai models #97

Merged 1 commit on Aug 8, 2024
157 changes: 128 additions & 29 deletions pages/docs/configuration/librechat_yaml/ai_endpoints/togetherai.mdx
@@ -11,59 +11,158 @@ description: Example configuration for together.ai

- **Known:** icon provided.
- **Known issue:** fetching list of models is not supported.
- The example includes a model list, which was last updated on August 1, 2024, for your convenience. A trimmed-down variant is also sketched after the full example below.

<Callout type="tip" title="Fetch and order the models" collapsible>
This Python script fetches and sorts the available chat models for you. The output is saved to `models_togetherai.json`, formatted so it should be easy to include in the YAML config; the small helper after the script turns it into a paste-ready list.

```py filename="fetch_togetherai.py"
import json

import requests

# your together.ai API key
api_key = ""

# together.ai models endpoint
url = "https://api.together.xyz/v1/models"

# request headers
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {api_key}"
}

# request the full model catalog
response = requests.get(url, headers=headers)
response.raise_for_status()

# parse the JSON response (a list of model objects)
data = response.json()

# extract a sorted list of unique chat model IDs
model_ids = sorted(
    {
        model["id"]
        for model in data
        if model["type"] == "chat"
    }
)

# write the result to models_togetherai.json
with open("models_togetherai.json", "w") as file:
    json.dump(model_ids, file, indent=2)
```
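
If you want something you can paste straight into `librechat.yaml`, the short helper below (a hypothetical add-on, not part of the script above) reads `models_togetherai.json` and prints the IDs as an indented, quoted YAML list:

```py filename="json_to_yaml_list.py"
import json

# read the model IDs written by fetch_togetherai.py
with open("models_togetherai.json") as file:
    model_ids = json.load(file)

# print a quoted, comma-separated flow list, ready to paste under `default:`
print("default: [")
for index, model_id in enumerate(model_ids):
    comma = "," if index < len(model_ids) - 1 else ""
    print(f'  "{model_id}"{comma}')
print("  ]")
```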
</Callout>

```yaml
- name: "together.ai"
apiKey: "${TOGETHERAI_API_KEY}"
baseURL: "https://api.together.xyz"
models:
default: [
"zero-one-ai/Yi-34B-Chat",
"Austism/chronos-hermes-13b",
"DiscoResearch/DiscoLM-mixtral-8x7b-v2",
"Gryphe/MythoMax-L2-13b",
"lmsys/vicuna-13b-v1.5",
"lmsys/vicuna-7b-v1.5",
"lmsys/vicuna-13b-v1.5-16k",
"codellama/CodeLlama-13b-Instruct-hf",
"codellama/CodeLlama-34b-Instruct-hf",
"codellama/CodeLlama-70b-Instruct-hf",
"codellama/CodeLlama-7b-Instruct-hf",
"togethercomputer/llama-2-13b-chat",
"togethercomputer/llama-2-70b-chat",
"togethercomputer/llama-2-7b-chat",
"HuggingFaceH4/zephyr-7b-beta",
"NousResearch/Hermes-2-Theta-Llama-3-70B",
"NousResearch/Nous-Capybara-7B-V1p9",
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT",
"NousResearch/Nous-Hermes-2-Yi-34B",
"NousResearch/Nous-Hermes-Llama2-13b",
"NousResearch/Nous-Hermes-Llama2-70b",
"NousResearch/Nous-Hermes-llama-2-7b",
"NousResearch/Nous-Hermes-Llama2-13b",
"NousResearch/Nous-Hermes-2-Yi-34B",
"openchat/openchat-3.5-1210",
"Open-Orca/Mistral-7B-OpenOrca",
"togethercomputer/Qwen-7B-Chat",
"snorkelai/Snorkel-Mistral-PairRM-DPO",
"togethercomputer/alpaca-7b",
"togethercomputer/falcon-40b-instruct",
"togethercomputer/falcon-7b-instruct",
"togethercomputer/GPT-NeoXT-Chat-Base-20B",
"togethercomputer/Llama-2-7B-32K-Instruct",
"togethercomputer/Pythia-Chat-Base-7B-v0.16",
"togethercomputer/RedPajama-INCITE-Chat-3B-v1",
"togethercomputer/RedPajama-INCITE-7B-Chat",
"togethercomputer/StripedHyena-Nous-7B",
"Qwen/Qwen1.5-0.5B-Chat",
"Qwen/Qwen1.5-1.8B-Chat",
"Qwen/Qwen1.5-110B-Chat",
"Qwen/Qwen1.5-14B-Chat",
"Qwen/Qwen1.5-32B-Chat",
"Qwen/Qwen1.5-4B-Chat",
"Qwen/Qwen1.5-72B-Chat",
"Qwen/Qwen1.5-7B-Chat",
"Qwen/Qwen2-1.5B",
"Qwen/Qwen2-1.5B-Instruct",
"Qwen/Qwen2-72B",
"Qwen/Qwen2-72B-Instruct",
"Qwen/Qwen2-7B",
"Qwen/Qwen2-7B-Instruct",
"Snowflake/snowflake-arctic-instruct",
"Undi95/ReMM-SLERP-L2-13B",
"Undi95/Toppy-M-7B",
"WizardLM/WizardLM-13B-V1.2",
"allenai/OLMo-7B-Instruct",
"carson/ml31405bit",
"carson/ml3170bit",
"carson/ml318bit",
"carson/ml318br",
"codellama/CodeLlama-13b-Instruct-hf",
"codellama/CodeLlama-34b-Instruct-hf",
"codellama/CodeLlama-70b-Instruct-hf",
"codellama/CodeLlama-7b-Instruct-hf",
"cognitivecomputations/dolphin-2.5-mixtral-8x7b",
"databricks/dbrx-instruct",
"deepseek-ai/deepseek-coder-33b-instruct",
"deepseek-ai/deepseek-llm-67b-chat",
"garage-bAInd/Platypus2-70B-instruct",
"google/gemma-2-27b-it",
"google/gemma-2-9b-it",
"google/gemma-2b-it",
"google/gemma-7b-it",
"gradientai/Llama-3-70B-Instruct-Gradient-1048k",
"lmsys/vicuna-13b-v1.3",
"lmsys/vicuna-13b-v1.5",
"lmsys/vicuna-13b-v1.5-16k",
"lmsys/vicuna-7b-v1.3",
"lmsys/vicuna-7b-v1.5",
"meta-llama/Llama-2-13b-chat-hf",
"meta-llama/Llama-2-70b-chat-hf",
"meta-llama/Llama-2-7b-chat-hf",
"meta-llama/Llama-3-70b-chat-hf",
"meta-llama/Llama-3-8b-chat-hf",
"meta-llama/Meta-Llama-3-70B-Instruct",
"meta-llama/Meta-Llama-3-70B-Instruct-Lite",
"meta-llama/Meta-Llama-3-70B-Instruct-Turbo",
"meta-llama/Meta-Llama-3-8B-Instruct",
"meta-llama/Meta-Llama-3-8B-Instruct-Lite",
"meta-llama/Meta-Llama-3-8B-Instruct-Turbo",
"meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Reference",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
"meta-llama/Meta-Llama-3.1-70B-Reference",
"meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
"microsoft/WizardLM-2-8x22B",
"mistralai/Mistral-7B-Instruct-v0.1",
"mistralai/Mistral-7B-Instruct-v0.2",
"mistralai/Mistral-7B-Instruct-v0.3",
"mistralai/Mixtral-8x22B-Instruct-v0.1",
"mistralai/Mixtral-8x7B-Instruct-v0.1",
"openchat/openchat-3.5-1210",
"snorkelai/Snorkel-Mistral-PairRM-DPO",
"teknium/OpenHermes-2-Mistral-7B",
"teknium/OpenHermes-2p5-Mistral-7B",
"upstage/SOLAR-10.7B-Instruct-v1.0"
]
"togethercomputer/CodeLlama-13b-Instruct",
"togethercomputer/CodeLlama-34b-Instruct",
"togethercomputer/CodeLlama-7b-Instruct",
"togethercomputer/Koala-13B",
"togethercomputer/Koala-7B",
"togethercomputer/Llama-2-7B-32K-Instruct",
"togethercomputer/Llama-3-8b-chat-hf-int4",
"togethercomputer/Llama-3-8b-chat-hf-int8",
"togethercomputer/SOLAR-10.7B-Instruct-v1.0-int4",
"togethercomputer/StripedHyena-Nous-7B",
"togethercomputer/alpaca-7b",
"togethercomputer/guanaco-13b",
"togethercomputer/guanaco-33b",
"togethercomputer/guanaco-65b",
"togethercomputer/guanaco-7b",
"togethercomputer/llama-2-13b-chat",
"togethercomputer/llama-2-70b-chat",
"togethercomputer/llama-2-7b-chat",
"upstage/SOLAR-10.7B-Instruct-v1.0",
"zero-one-ai/Yi-34B-Chat"
]
fetch: false # fetching list of models is not supported
titleConvo: true
titleModel: "togethercomputer/llama-2-7b-chat"
```
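
If you would rather not carry the full catalog, the same structure works with a much shorter selection. A minimal sketch, assuming you only want a couple of models (the picks here are illustrative, not a recommendation):

```yaml
- name: "together.ai"
  apiKey: "${TOGETHERAI_API_KEY}"
  baseURL: "https://api.together.xyz"
  models:
    default: [
      "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
      "mistralai/Mixtral-8x7B-Instruct-v0.1"
      ]
    fetch: false # fetching list of models is not supported
  titleConvo: true
  titleModel: "togethercomputer/llama-2-7b-chat"
```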