
Commit

Merge pull request #163 from jhc13/llava-next-prompts
Fix LLaVA-NeXT Vicuna prompt postprocessing
jhc13 authored Jun 4, 2024
2 parents 5edb6f1 + e6d5d55 · commit 204a58e
Showing 1 changed file with 3 additions and 2 deletions.
taggui/auto_captioning/prompts.py (5 changes: 3 additions & 2 deletions)
@@ -61,9 +61,10 @@ def postprocess_prompt_and_generated_text(model_type: CaptionModelType,
             generated_text)
         prompt = prompt.replace('<grounding>', '')
     elif model_type in (CaptionModelType.LLAVA_1_5,
-                        CaptionModelType.LLAVA_NEXT_MISTRAL,
-                        CaptionModelType.LLAVA_NEXT_VICUNA):
+                        CaptionModelType.LLAVA_NEXT_MISTRAL):
         prompt = prompt.replace('<image>', ' ')
+    elif model_type == CaptionModelType.LLAVA_NEXT_VICUNA:
+        prompt = prompt.replace('<image>', '')
     elif model_type == CaptionModelType.LLAVA_LLAMA_3:
         prompt = prompt.replace('<|start_header_id|>', '')
         prompt = prompt.replace('<|end_header_id|>', '')
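For context, a minimal standalone sketch of the branch logic after this commit. The strip_image_token helper and the enum string values are hypothetical and chosen for illustration; in taggui this logic lives inside postprocess_prompt_and_generated_text, and only the CaptionModelType members visible in the diff are listed.

from enum import Enum


class CaptionModelType(Enum):
    # Assumed values; only members that appear in the diff are included.
    LLAVA_1_5 = 'llava-1.5'
    LLAVA_NEXT_MISTRAL = 'llava-next-mistral'
    LLAVA_NEXT_VICUNA = 'llava-next-vicuna'
    LLAVA_LLAMA_3 = 'llava-llama-3'


def strip_image_token(model_type: CaptionModelType, prompt: str) -> str:
    """Hypothetical helper showing the post-commit handling of '<image>'."""
    if model_type in (CaptionModelType.LLAVA_1_5,
                      CaptionModelType.LLAVA_NEXT_MISTRAL):
        # These prompt templates replace the image token with a space.
        return prompt.replace('<image>', ' ')
    if model_type == CaptionModelType.LLAVA_NEXT_VICUNA:
        # The fix in this commit: the Vicuna template drops the token
        # outright instead of substituting a space.
        return prompt.replace('<image>', '')
    return prompt

The change splits LLAVA_NEXT_VICUNA out of the shared branch so it no longer inherits the space substitution used for the LLaVA 1.5 and LLaVA-NeXT Mistral templates.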

