Hi, thanks for sharing this work. I was trying to reproduce your results, starting from your usage demo. However, I found that the candidate responses all look something like this:
I suspect it might be because of the token_decoder you're using: the model's output has 256 tokens, but the ASCII vocabulary covers only 128 symbols (0–127). I am not sure what kind of decoder I should use; when I tried a model with num_tokens = 128, the SFT process broke.
Could you please help me out with this?
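In case it helps clarify what I mean by the mismatch: a minimal sketch of a workaround I considered is to mask the logits for token ids outside the ASCII range before decoding, so argmax can never pick an invalid id even though the head emits 256 logits. (`decode_ascii` and the vocab size of 128 are my assumptions, not names from your code.)

```python
import numpy as np

def decode_ascii(logits, vocab_size=128):
    """Hypothetical workaround: decode per-step logits to a string.

    The model head emits 256 logits per step, but only ids 0-127 map to
    ASCII characters, so logits outside that range are masked to -inf
    before taking the argmax.
    """
    logits = np.asarray(logits, dtype=float).copy()
    logits[..., vocab_size:] = -np.inf  # invalid ids can never win argmax
    token_ids = logits.argmax(axis=-1)
    return "".join(chr(i) for i in token_ids)
```

For example, even if the largest raw logit at some step sits at id 200, the masked decoder still returns a valid ASCII character for that step.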