This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Commit

fix batch inference
emrgnt-cmplxty committed Sep 21, 2023
1 parent cedbb27 commit b431117
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion sciphi/llm/vllm_llm.py
@@ -59,5 +59,5 @@ def get_instruct_completion(self, prompt: str) -> str:
 
     def get_batch_instruct_completion(self, prompts: list[str]) -> list[str]:
         """Get batch instruction completion from local vLLM."""
-        raw_outputs = self.model.generate([prompts], self.sampling_params)
+        raw_outputs = self.model.generate(prompts, self.sampling_params)
         return [ele.outputs[0].text for ele in raw_outputs]
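
For context, vLLM's LLM.generate expects a flat list of prompt strings, so wrapping the batch as [prompts] passed a nested list and broke batch inference; passing prompts directly fixes it. Below is a minimal sketch of exercising the corrected call with a standalone vLLM setup; the model name, sampling settings, and prompts are illustrative assumptions, not part of this repository.

    from vllm import LLM, SamplingParams

    # Illustrative model and sampling settings (assumptions, not from the commit).
    model = LLM(model="facebook/opt-125m")
    sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

    prompts = ["What is batch inference?", "Summarize vLLM in one sentence."]

    # The fix: pass the list of prompts directly; [prompts] would send a nested list.
    raw_outputs = model.generate(prompts, sampling_params)
    completions = [ele.outputs[0].text for ele in raw_outputs]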
