We currently support the OpenAI Vision API, in which messages look like this:
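A representative message in that format (the prompt text and image URL here are illustrative):

messages=[
    {
        'role': 'user',
        'content': [
            {'type': 'text', 'text': "What's in this image?"},
            {'type': 'image_url', 'image_url': {'url': 'https://example.com/image.png'}},
        ],
    }
],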
However, ollama only supports local image paths or Base64 encoded images and seems to break unless queried like so:
messages=[
    {
        'role': 'user',
        'content': "What's in this image?",
        'images': [path],
    }
],
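For context, a complete request with the official ollama Python client would look roughly like this (the model name is illustrative; any vision-capable model works):

import ollama

response = ollama.chat(
    model='llava',  # illustrative vision-capable model
    messages=[
        {
            'role': 'user',
            'content': "What's in this image?",
            'images': ['/path/to/image.png'],  # local path or Base64 string
        }
    ],
)
print(response['message']['content'])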
There's PR 5208 merged in ollama, which should resolve the issue of `content` being an array instead of a string. However, PR 6680 is currently open for LiteLLM to fix the exact unmarshalling error referenced in #10, so the problem could be in how LiteLLM queries ollama. If that PR gets merged, we might not need to do anything. Otherwise, we could implement essentially the same fix on our end (i.e. flattening the `content` array, adding an `images` key, and potentially throwing a more explicit error for web images), as sketched below.
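A minimal sketch of what that fallback could look like, assuming OpenAI-style message dicts as input (the function name and error wording are ours, not taken from either PR):

def to_ollama_message(message):
    """Flatten an OpenAI Vision-style message into the shape ollama expects."""
    content = message.get('content')
    if isinstance(content, str):
        # Already a plain string: nothing to flatten.
        return message

    text_parts, images = [], []
    for part in content:
        if part['type'] == 'text':
            text_parts.append(part['text'])
        elif part['type'] == 'image_url':
            url = part['image_url']['url']
            if url.startswith('data:'):
                # Base64 data URL: keep only the encoded payload.
                images.append(url.split(',', 1)[1])
            elif url.startswith(('http://', 'https://')):
                # ollama cannot fetch remote images itself.
                raise ValueError(f'Web images are not supported by ollama: {url}')
            else:
                # Assume a local file path.
                images.append(url)
    return {'role': message['role'], 'content': ' '.join(text_parts), 'images': images}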