Great approach and super helpful!
This is not an issue per se, but more a question about how best to approach larger models.
TinyLlama is great for running locally because it's so lightweight. What would you suggest if we wanted to run a larger model (e.g., Llama 3.1 8B or 70B)?
Any insight would be super helpful!