MLM task
To Reproduce
1. jiant version you're using: 2.2.0
2. Environment where you're using jiant, e.g., "Macbook CPU"

Expected behavior
It should create the model and start training.
Screenshots
It throws the following exception when it tries to create the MLM head.
Additional context
The issue happens in the following line:
jiant/jiant/proj/main/modeling/heads.py, line 70 (commit 310f22b)
It happens because JiantMLMHeadFactory is called with arguments while its initializer doesn't accept any. A workaround is to create an instance first by adding the following lines:

if head_class == JiantMLMHeadFactory:
    head_class = head_class()
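To make the workaround concrete, here is a minimal, self-contained sketch of the registry-plus-factory pattern it patches. The class and registry names below are simplified stand-ins, not jiant's actual definitions; the point is only that a factory class registered in the head registry has to be instantiated before it is called with head arguments.

```python
# Illustrative sketch only: simplified stand-ins for jiant's
# JiantHeadFactory / JiantMLMHeadFactory registry pattern.


class MLMHeadFactory:
    """Factory whose __init__ takes no arguments; the work happens in __call__."""

    def __call__(self, hidden_size, vocab_size, **kwargs):
        # In jiant this would build a model-specific MLM head; here we
        # return a plain dict to keep the sketch dependency-free.
        return {"type": "mlm_head", "hidden_size": hidden_size, "vocab_size": vocab_size}


class ClassificationHead:
    """Ordinary head class that accepts its arguments directly in __init__."""

    def __init__(self, hidden_size, num_labels, **kwargs):
        self.hidden_size = hidden_size
        self.num_labels = num_labels


HEAD_REGISTRY = {
    "classification": ClassificationHead,
    "mlm": MLMHeadFactory,  # note: the *factory class* itself is registered here
}


def create_head(task_type, **head_kwargs):
    head_class = HEAD_REGISTRY[task_type]
    # The workaround from the issue: instantiate the factory first, so the
    # call below hits __call__ (which accepts arguments) instead of __init__.
    if head_class == MLMHeadFactory:
        head_class = head_class()
    return head_class(**head_kwargs)


if __name__ == "__main__":
    print(create_head("classification", hidden_size=768, num_labels=3))
    print(create_head("mlm", hidden_size=768, vocab_size=30522))
```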
After this fix, the following line throws an exception:
jiant/jiant/proj/main/modeling/heads.py, line 208 (commit 310f22b)
It can be fixed by changing it to prevent hidden_dropout_prob from being passed in with **kwargs, which leads to an error. Also, as of Transformers v4.5, transformers.models.bert.modeling_bert.BertLayerNorm and transformers.models.bert.modeling_bert.gelu are no longer supported; my correction is to change the former to torch.nn.LayerNorm and the latter to x = transformers.models.bert.modeling_bert.gelu(x).
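To illustrate both corrections, here is a generic sketch of a BERT-style MLM transform block, assuming a plain PyTorch setup. It is not jiant's heads.py code, and torch.nn.functional.gelu is used as a stand-in for the activation the issue author chose; it shows hidden_dropout_prob being dropped before **kwargs are forwarded, and torch.nn.LayerNorm replacing the removed BertLayerNorm.

```python
# Illustrative sketch only -- a generic BERT-style MLM transform block,
# not jiant's actual heads.py code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLMTransform(nn.Module):
    def __init__(self, hidden_size, layer_norm_eps=1e-12):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        # BertLayerNorm was effectively torch.nn.LayerNorm in recent
        # Transformers versions, so this swap preserves behavior.
        self.layer_norm = nn.LayerNorm(hidden_size, eps=layer_norm_eps)

    def forward(self, x):
        x = self.dense(x)
        x = F.gelu(x)          # activation applied explicitly: x = gelu(x)
        x = self.layer_norm(x)
        return x


def build_mlm_head(hidden_size, **kwargs):
    # hidden_dropout_prob may arrive in **kwargs from upstream config code;
    # pop it so it is not forwarded to a constructor that doesn't accept it.
    kwargs.pop("hidden_dropout_prob", None)
    return MLMTransform(hidden_size, **kwargs)


if __name__ == "__main__":
    head = build_mlm_head(hidden_size=768, hidden_dropout_prob=0.1)
    out = head(torch.randn(2, 5, 768))
    print(out.shape)  # torch.Size([2, 5, 768])
```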
Then, the next issue is in the following line:
jiant/jiant/proj/main/modeling/taskmodels.py, line 283 (commit 310f22b)
and it can be fixed by changing the line to:
input_ids=masked_batch.masked_input_ids,
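To show why this change matters, here is a small self-contained sketch, with a hypothetical MaskedBatch dataclass and a toy model standing in for jiant's batch and encoder: the encoder has to receive masked_input_ids rather than the original input_ids, otherwise the MLM objective sees the unmasked tokens and the loss becomes trivial.

```python
# Illustrative sketch only: simplified stand-ins for jiant's masked batch
# and encoder, not the taskmodels.py API.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class MaskedBatch:
    input_ids: torch.Tensor         # original token ids
    masked_input_ids: torch.Tensor  # ids with some positions replaced by [MASK]
    masked_lm_labels: torch.Tensor  # original ids at masked positions, -100 elsewhere
    attention_mask: torch.Tensor


class TinyMLM(nn.Module):
    def __init__(self, vocab_size=100, hidden_size=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_ids, attention_mask, labels):
        hidden = self.embed(input_ids) * attention_mask.unsqueeze(-1)
        logits = self.decoder(hidden)
        loss = nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
        )
        return loss, logits


if __name__ == "__main__":
    ids = torch.randint(5, 100, (2, 8))
    masked_ids = ids.clone()
    labels = torch.full_like(ids, -100)
    mask_positions = torch.rand(ids.shape) < 0.15
    mask_positions[0, 0] = True             # ensure at least one masked position
    masked_ids[mask_positions] = 4          # pretend 4 is the [MASK] id
    labels[mask_positions] = ids[mask_positions]
    batch = MaskedBatch(ids, masked_ids, labels, torch.ones_like(ids))

    model = TinyMLM()
    # The point of the fix: feed masked_input_ids, not input_ids.
    loss, _ = model(
        input_ids=batch.masked_input_ids,
        attention_mask=batch.attention_mask,
        labels=batch.masked_lm_labels,
    )
    print(float(loss))
```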