Given that we only have Llama 3 70B and 8B, it would be useful to have a TinyLlama trained with the Llama 3 tokenizer so that it could serve as a draft model for speculative decoding.
Are there plans to create a Llama 3 version?
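For context, here is a minimal sketch of how such a model would be used with Hugging Face transformers' assisted generation. The draft checkpoint ID (`tinyllama/TinyLlama-1.1B-Llama-3`) is hypothetical, since that is exactly the model being requested; the target model ID is the real Llama 3 8B Instruct repo.

```python
# Sketch: speculative (assisted) decoding with a small draft model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Meta-Llama-3-8B-Instruct"
draft_id = "tinyllama/TinyLlama-1.1B-Llama-3"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Classic assisted generation requires the draft model to share the
# target's tokenizer/vocabulary -- which is why a Llama 3 TinyLlama
# is needed (the existing TinyLlama uses the Llama 2 tokenizer).
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(target.device)
# The draft model proposes tokens; the target verifies them in one pass,
# keeping the target model's output distribution unchanged.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```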