
Usage examples. #26

Open
murrellb opened this issue Dec 30, 2024 · 0 comments
Checking equivalence to Hugging Face transformers (via HuggingFaceTransformers.jl):

using HuggingFaceTransformers, Jjama3, JSON3

path = "/localpathto/Qwen2.5-Coder-0.5B-Instruct"

config = JSON3.read(read(path*"/config.json", String));
Jmodel = load_llama3_from_safetensors(path*"/model.safetensors", config);

model = from_pretrained(CausalLM, path);
tokenizer = from_pretrained(AutoTokenizer, path)
prompt = "Explain, in as much detail as possible, the Julia ecosystem for deep learning."

messages = [
    Dict("role" => "system", "content" => "You are Bob, a helpful assistant."),
    Dict("role" => "user", "content" => prompt)
]

toks = apply_chat_template(
    tokenizer,
    messages,
    add_generation_prompt=true
)
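For reference, a minimal sketch of what `apply_chat_template` roughly produces for Qwen-style (ChatML) models before tokenization. The `chatml` helper below is hypothetical, written here only to illustrate the prompt format; the real template lives in the model's tokenizer config.

```julia
# Hypothetical helper: render messages in the ChatML format used by Qwen models.
# Each message becomes "<|im_start|>role\ncontent<|im_end|>\n"; with
# add_generation_prompt=true, an open assistant turn is appended.
function chatml(messages; add_generation_prompt=true)
    s = join(["<|im_start|>$(m["role"])\n$(m["content"])<|im_end|>\n" for m in messages])
    return add_generation_prompt ? s * "<|im_start|>assistant\n" : s
end

msgs = [
    Dict("role" => "system", "content" => "You are Bob, a helpful assistant."),
    Dict("role" => "user", "content" => "Hello"),
]
print(chatml(msgs))
```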

gen = HuggingFaceTransformers.generate(
    model,
    toks,
    max_length=100,
    do_sample=false,
    repetition_penalty=1.0 #Very important for equivalent sampling, because Jjama3 doesn't have this yet.
);

jjamagen = Jjama3.generate(Jmodel, toks .+ 1, # Jjama3 uses 1-based token IDs, so shift the 0-based HF IDs up by one
        max_new_tokens=100-length(toks),
        end_token = HuggingFaceTransformers.encode(tokenizer, "<|im_end|>")[1]+1);

println(HuggingFaceTransformers.decode(tokenizer, gen));
println(HuggingFaceTransformers.decode(tokenizer, jjamagen .- 1)); # shift back to 0-based before decoding

(gen .+ 1) == jjamagen # true iff the two implementations produced identical token sequences
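The `.+ 1` / `.- 1` shifts throughout the example account for the two ID conventions: Hugging Face tokenizers emit 0-based token IDs (Python convention), while Jjama3 indexes tokens from 1 (Julia convention). A minimal sketch of the round trip, using made-up example IDs:

```julia
# Illustrative only: the IDs below are arbitrary, not real Qwen vocabulary entries.
py_ids = [151644, 8948, 198]    # 0-based IDs as an HF tokenizer would return them
jl_ids = py_ids .+ 1            # shift to 1-based before passing to Jjama3
@assert jl_ids .- 1 == py_ids   # shifting back recovers the original IDs
```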