-
Hi @JanMP, I'm not familiar with the Ollama.ai class, but if it installs models in a way that is compatible with the Hugging Face Transformers library, it should be pretty easy to load them locally using the Transformers LLM class in Guidance (https://github.com/guidance-ai/guidance/blob/main/guidance/llms/_transformers.py). There are several subclasses in https://github.com/guidance-ai/guidance/tree/main/guidance/llms/transformers that demonstrate how to add a particular tokenizer or chat role syntax to support model-specific needs. This might be a route to adding support for Ollama's Modelfiles. It would need investigation, but it would be a great contribution if it's something you are interested in looking into.
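For anyone landing here, a minimal sketch of what that could look like with the `guidance.llms.Transformers` class referenced above. The local path is hypothetical and assumes the directory contains Hugging Face Transformers-compatible weights and tokenizer files:

```python
import guidance

# Hypothetical local directory with Transformers-compatible files
# (config.json, tokenizer files, model weights).
llm = guidance.llms.Transformers("/path/to/local/model")

# Handlebars-style program; {{gen}} generates text with the llm above.
program = guidance("Tell me a one-line joke: {{gen 'joke' max_tokens=30}}", llm=llm)
out = program()
print(out["joke"])
```

Whether this works for an Ollama-installed model depends on whether the Modelfile/blob format can be read by `AutoModelForCausalLM.from_pretrained`, which is exactly the part that would need investigation.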
-
Did you manage to get it working? I would be very interested in such an implementation. Thank you in advance!
-
I downloaded the ollama binary for my Linux machine and then ran this:

Then I can use code like this:

```python
from guidance import models, gen

path = "/home/xrdawson/.ollama/models/blobs/sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29"
llama3 = models.LlamaCpp(path, echo=False)

lm = llama3 + 'Tell me a story about a neanderthal? ' + gen(stop='.', name="test")
print(lm["test"])
```

which outputs the generated story.
-
Is there a way to use LLMs installed locally via Ollama.ai?