Pretrained model #5

How was the BERT pretrained model here obtained? Or is BERT used directly for the classification task, without the Masked LM and Next Sentence Prediction pretraining?

Comments
The pretrained model is loaded directly from the Google release at https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip (the BERT Chinese model). The input is passed through that model's transformer encoder as-is; only the loss computation is adjusted, using cross-entropy there, and the whole model is then fine-tuned:
model.bert.load_state_dict(torch.load(args.init_checkpoint, map_location='cpu'))
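Below is a minimal sketch of the setup this comment describes, assuming the Hugging Face transformers package (v4) and its bert-base-chinese port of the Google checkpoint linked above; the class and variable names are illustrative stand-ins, not this repo's actual code.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer  # assumes transformers v4

class BertClassifier(nn.Module):
    """Pretrained BERT encoder plus a fresh classification head."""
    def __init__(self, num_labels: int = 2):
        super().__init__()
        # load the pretrained Chinese BERT encoder (HF port of the Google zip)
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # pooled [CLS]-based representation -> class logits
        return self.classifier(out.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertClassifier(num_labels=2)
# equivalent to the line quoted above, if you converted the TF checkpoint
# to a PyTorch state dict yourself (args.init_checkpoint is hypothetical):
# model.bert.load_state_dict(torch.load(args.init_checkpoint, map_location="cpu"))

batch = tokenizer(["这部电影很好看", "这部电影很差"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, labels)  # the adjusted loss: plain cross-entropy
loss.backward()  # fine-tunes the pretrained encoder and the new head together
```

Only the classification head starts from random initialization; fine-tuning updates the pretrained encoder weights as well, which is why no additional Masked LM or Next Sentence pretraining is run.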
Hello, I've run into a similar problem. Have you managed to solve it?