In your paper "Align before Fuse: Vision and Language Representation Learning with Momentum Distillation", you show a visualization of image-text pairs in which the ALBEF model outputs text from an image, which suggests that ALBEF has this capability. However, ALBEF only has a decoder in model_vqa.py, so I would like to know how you generated the text.

Following the text-generation approach of the BLIP paper, I used a pre-trained BERT model from the Hugging Face transformers library as the text_decoder; the code is shown below. The generated results are strange, though: the output is always the same few words, and although the loss has gone down, the generated text is still very poor.

-----------code--------------
-----------result--------------
sung shan shan gang gang gang gang gang gang
and and.......
a truck drives on the road past a utility pole and grassy hill
a snowboarder flies through the air while holding their board with one hand
I hope you can tell me the correct way to generate the text.
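For reference, here is a minimal sketch of the kind of BLIP-style decoder setup described above: BertLMHeadModel from transformers configured as a causal decoder with cross-attention over the image embeddings, decoded with a simple greedy loop. The names visual_encoder and greedy_caption are illustrative assumptions, not code from the ALBEF or BLIP repositories.

-----------sketch (illustrative, not the original code)--------------

import torch
from transformers import BertConfig, BertLMHeadModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Turn BERT into an image-conditioned causal decoder.
config = BertConfig.from_pretrained("bert-base-uncased")
config.is_decoder = True           # causal self-attention instead of bidirectional
config.add_cross_attention = True  # cross-attend to the visual features
text_decoder = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)

@torch.no_grad()
def greedy_caption(image, visual_encoder, max_length=30):
    # visual_encoder stands in for ALBEF's ViT; assumed to return patch embeddings (B, N, D).
    image_embeds = visual_encoder(image)
    image_atts = torch.ones(image_embeds.shape[:-1], dtype=torch.long)

    # Start from [CLS]; BLIP uses a dedicated decode token, so [CLS] is an assumption here.
    input_ids = torch.full(
        (image_embeds.size(0), 1), tokenizer.cls_token_id, dtype=torch.long
    )
    for _ in range(max_length):
        logits = text_decoder(
            input_ids=input_ids,
            encoder_hidden_states=image_embeds,
            encoder_attention_mask=image_atts,
        ).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=1)
        if (next_token == tokenizer.sep_token_id).all():
            break
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in input_ids]

One common cause of degenerate, repetitive output like the result above is leaving the decoder bidirectional (is_decoder left False) or never passing the image embeddings as encoder_hidden_states during training and generation, in which case the model ignores the image entirely.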