Hey, I'm trying to run a local LLM pipeline:

- I want to run llama.cpp in chat completion mode, reading these .md files and performing a single chat completion for each.
- The chat completion output should be written to a file.
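For reference, here's roughly the loop I have in mind, using `llama-server`'s OpenAI-compatible `/v1/chat/completions` endpoint (model path, port, and file locations are placeholders, not my actual setup):

```sh
# Start the server once beforehand (placeholder model path):
#   llama-server -m ./models/model.gguf --port 8080

for f in ./notes/*.md; do
  # Build a chat request with the file contents as the user message;
  # jq handles JSON-escaping of arbitrary markdown text.
  jq -n --rawfile content "$f" \
    '{messages: [{role: "user", content: $content}]}' \
  | curl -s http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" -d @- \
  | jq -r '.choices[0].message.content' > "${f%.md}.out.txt"
done
```

Each response lands next to its source file as `*.out.txt`. But ideally I'd like to do this with the CLI directly: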
It works well if I paste the content of an .md file directly into chat mode, but I'm having trouble using the `-f` flag to read the prompt from a file: `-f` replaces the entire prompt, and generation doesn't stop properly. Any help is appreciated!
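For concreteness, this is roughly the invocation that misbehaves for me (model and file paths are placeholders):

```sh
# Reads the whole .md file as the prompt; output runs on without stopping.
./llama-cli -m ./models/model.gguf -f ./notes/example.md -n 512
```

My guess is that `-f` feeds the file in as a raw completion prompt rather than wrapping it in the model's chat template, so the model never produces an end-of-turn stop token, but I haven't confirmed that.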