I am currently testing with captured blog posts that contain many lines and paragraphs to see how well they are summarized. However, most of the time Ollama (using Qwen for its large context window and small footprint) produces only one-line summaries. Asking questions with page context does not behave the same way: Q&A often yields multi-line answers.
Is there a way to set a maximum or target length for the output? Alternatively, is there a way to split the page into chunks that Ollama treats as individual inputs (semantic chunking)?
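
For reference, this is roughly the kind of control I have in mind. A minimal sketch against the Ollama REST API, assuming a local instance on the default port and a Qwen tag of your choosing: `num_predict` caps the output length, the prompt asks for a multi-paragraph summary, and a simple paragraph-based map-reduce approximates the chunking idea (length-based rather than truly semantic):

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "qwen2.5:7b"  # assumed model tag; substitute whatever Qwen variant you run


def summarize(text: str, max_tokens: int = 512) -> str:
    """Summarize one block of text, capping the output length via num_predict."""
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "prompt": (
            "Summarize the following text in 3-5 full paragraphs, "
            "covering each major section:\n\n" + text
        ),
        "stream": False,
        "options": {
            "num_predict": max_tokens,  # upper bound on generated tokens
            "num_ctx": 8192,            # so the whole page fits in context
        },
    }, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]


def chunked_summary(page: str, chunk_chars: int = 4000) -> str:
    """Naive map-reduce: split on paragraphs, summarize each chunk,
    then summarize the concatenated partial summaries."""
    paragraphs = page.split("\n\n")
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) > chunk_chars:
            chunks.append(current)
            current = ""
        current += p + "\n\n"
    if current:
        chunks.append(current)
    partials = [summarize(c, max_tokens=256) for c in chunks]
    return summarize("\n\n".join(partials), max_tokens=512)
```

Something like this works as an external script, but I am asking whether the length cap and the chunking can be configured inside the app itself.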