
RuntimeError: Error(s) in loading state_dict for VisionTransformer #95

Open

zimoradi76 opened this issue Oct 11, 2024 · 0 comments
Hello,

Thanks for sharing the implementation. I get an error while trying to train your model.

I used Seg-T-Mask/16 as the backbone, and this is my training command:

python -m segm.train --log-dir seg_tiny_mask --dataset ade20k --backbone vit_tiny_patch16_384 --decoder mask_transformer

This is the error:

python -m segm.train --log-dir seg_tiny_mask --dataset ade20k --backbone vit_tiny_patch16_384 --decoder mask_transformer
/home/moradi/code/segmenter-master/segm/train.py:72: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model = torch.load('/home/moradi/code/segmenter-master/vit_base_patch8_384.pth', map_location='cpu')
Starting process with rank 0...
Process 0 is connected.
All processes are connected.
Use normalization: {'mean': (127.5, 127.5, 127.5), 'std': (127.5, 127.5, 127.5)}
root_dir is here: /home/moradi/data/ADEChallengeData2016
2024-10-11 18:48:27,253 - mmseg - INFO - Loaded 20210 images
Use normalization: {'mean': (127.5, 127.5, 127.5), 'std': (127.5, 127.5, 127.5)}
root_dir is here: /home/moradi/data/ADEChallengeData2016
2024-10-11 18:48:27,308 - mmseg - INFO - Loaded 2000 images
/home/moradi/code/segmenter-master/segm/model/factory.py:84: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(local_path, map_location='cuda')
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/runpy.py", line 197, in _run_module_as_main
[rank0]: return _run_code(code, main_globals, None,
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/runpy.py", line 87, in _run_code
[rank0]: exec(code, run_globals)
[rank0]: File "/home/moradi/code/segmenter-master/segm/train.py", line 307, in
[rank0]: main()
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/site-packages/click/core.py", line 1157, in call
[rank0]: return self.main(*args, **kwargs)
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/site-packages/click/core.py", line 1078, in main
[rank0]: rv = self.invoke(ctx)
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
[rank0]: return ctx.invoke(self.callback, **ctx.params)
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/site-packages/click/core.py", line 783, in invoke
[rank0]: return __callback(*args, **kwargs)
[rank0]: File "/home/moradi/code/segmenter-master/segm/train.py", line 180, in main
[rank0]: model = create_segmenter(net_kwargs)
[rank0]: File "/home/moradi/code/segmenter-master/segm/model/factory.py", line 116, in create_segmenter
[rank0]: encoder = create_vit(model_cfg)
[rank0]: File "/home/moradi/code/segmenter-master/segm/model/factory.py", line 77, in create_vit
[rank0]: load_custom_pretrained_2(model, default_cfg)
[rank0]: File "/home/moradi/code/segmenter-master/segm/model/factory.py", line 86, in load_custom_pretrained_2
[rank0]: model.load_state_dict(filtered_dict, strict=True)
[rank0]: File "/opt/anaconda3/envs/Project1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
[rank0]: raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
[rank0]: RuntimeError: Error(s) in loading state_dict for VisionTransformer:
[rank0]: Missing key(s) in state_dict: "cls_token", "pos_embed", "patch_embed.proj.weight", "patch_embed.proj.bias", "blocks.0.norm1.weight", "blocks.0.norm1.bias", "blocks.0.norm2.weight", "blocks.0.norm2.bias", "blocks.0.attn.qkv.weight", "blocks.0.attn.qkv.bias", "blocks.0.attn.proj.weight", "blocks.0.attn.proj.bias", "blocks.0.mlp.fc1.weight", "blocks.0.mlp.fc1.bias", "blocks.0.mlp.fc2.weight", "blocks.0.mlp.fc2.bias", "blocks.1.norm1.weight", "blocks.1.norm1.bias", "blocks.1.norm2.weight", "blocks.1.norm2.bias", "blocks.1.attn.qkv.weight", "blocks.1.attn.qkv.bias", "blocks.1.attn.proj.weight", "blocks.1.attn.proj.bias", "blocks.1.mlp.fc1.weight", "blocks.1.mlp.fc1.bias", "blocks.1.mlp.fc2.weight", "blocks.1.mlp.fc2.bias", "blocks.2.norm1.weight", "blocks.2.norm1.bias", "blocks.2.norm2.weight", "blocks.2.norm2.bias", "blocks.2.attn.qkv.weight", "blocks.2.attn.qkv.bias", "blocks.2.attn.proj.weight", "blocks.2.attn.proj.bias", "blocks.2.mlp.fc1.weight", "blocks.2.mlp.fc1.bias", "blocks.2.mlp.fc2.weight", "blocks.2.mlp.fc2.bias", "blocks.3.norm1.weight", "blocks.3.norm1.bias", "blocks.3.norm2.weight", "blocks.3.norm2.bias", "blocks.3.attn.qkv.weight", "blocks.3.attn.qkv.bias", "blocks.3.attn.proj.weight", "blocks.3.attn.proj.bias", "blocks.3.mlp.fc1.weight", "blocks.3.mlp.fc1.bias", "blocks.3.mlp.fc2.weight", "blocks.3.mlp.fc2.bias", "blocks.4.norm1.weight", "blocks.4.norm1.bias", "blocks.4.norm2.weight", "blocks.4.norm2.bias", "blocks.4.attn.qkv.weight", "blocks.4.attn.qkv.bias", "blocks.4.attn.proj.weight", "blocks.4.attn.proj.bias", "blocks.4.mlp.fc1.weight", "blocks.4.mlp.fc1.bias", "blocks.4.mlp.fc2.weight", "blocks.4.mlp.fc2.bias", "blocks.5.norm1.weight", "blocks.5.norm1.bias", "blocks.5.norm2.weight", "blocks.5.norm2.bias", "blocks.5.attn.qkv.weight", "blocks.5.attn.qkv.bias", "blocks.5.attn.proj.weight", "blocks.5.attn.proj.bias", "blocks.5.mlp.fc1.weight", "blocks.5.mlp.fc1.bias", "blocks.5.mlp.fc2.weight", "blocks.5.mlp.fc2.bias", "blocks.6.norm1.weight", "blocks.6.norm1.bias", "blocks.6.norm2.weight", "blocks.6.norm2.bias", "blocks.6.attn.qkv.weight", "blocks.6.attn.qkv.bias", "blocks.6.attn.proj.weight", "blocks.6.attn.proj.bias", "blocks.6.mlp.fc1.weight", "blocks.6.mlp.fc1.bias", "blocks.6.mlp.fc2.weight", "blocks.6.mlp.fc2.bias", "blocks.7.norm1.weight", "blocks.7.norm1.bias", "blocks.7.norm2.weight", "blocks.7.norm2.bias", "blocks.7.attn.qkv.weight", "blocks.7.attn.qkv.bias", "blocks.7.attn.proj.weight", "blocks.7.attn.proj.bias", "blocks.7.mlp.fc1.weight", "blocks.7.mlp.fc1.bias", "blocks.7.mlp.fc2.weight", "blocks.7.mlp.fc2.bias", "blocks.8.norm1.weight", "blocks.8.norm1.bias", "blocks.8.norm2.weight", "blocks.8.norm2.bias", "blocks.8.attn.qkv.weight", "blocks.8.attn.qkv.bias", "blocks.8.attn.proj.weight", "blocks.8.attn.proj.bias", "blocks.8.mlp.fc1.weight", "blocks.8.mlp.fc1.bias", "blocks.8.mlp.fc2.weight", "blocks.8.mlp.fc2.bias", "blocks.9.norm1.weight", "blocks.9.norm1.bias", "blocks.9.norm2.weight", "blocks.9.norm2.bias", "blocks.9.attn.qkv.weight", "blocks.9.attn.qkv.bias", "blocks.9.attn.proj.weight", "blocks.9.attn.proj.bias", "blocks.9.mlp.fc1.weight", "blocks.9.mlp.fc1.bias", "blocks.9.mlp.fc2.weight", "blocks.9.mlp.fc2.bias", "blocks.10.norm1.weight", "blocks.10.norm1.bias", "blocks.10.norm2.weight", "blocks.10.norm2.bias", "blocks.10.attn.qkv.weight", "blocks.10.attn.qkv.bias", "blocks.10.attn.proj.weight", "blocks.10.attn.proj.bias", "blocks.10.mlp.fc1.weight", "blocks.10.mlp.fc1.bias", "blocks.10.mlp.fc2.weight", "blocks.10.mlp.fc2.bias", 
"blocks.11.norm1.weight", "blocks.11.norm1.bias", "blocks.11.norm2.weight", "blocks.11.norm2.bias", "blocks.11.attn.qkv.weight", "blocks.11.attn.qkv.bias", "blocks.11.attn.proj.weight", "blocks.11.attn.proj.bias", "blocks.11.mlp.fc1.weight", "blocks.11.mlp.fc1.bias", "blocks.11.mlp.fc2.weight", "blocks.11.mlp.fc2.bias", "norm.weight", "norm.bias", "head.weight", "head.bias".
[rank0]: Unexpected key(s) in state_dict: "encoder.cls_token", "encoder.pos_embed", "encoder.patch_embed.proj.weight", "encoder.patch_embed.proj.bias", "encoder.blocks.0.norm1.weight", "encoder.blocks.0.norm1.bias", "encoder.blocks.0.norm2.weight", "encoder.blocks.0.norm2.bias", "encoder.blocks.0.attn.qkv.weight", "encoder.blocks.0.attn.qkv.bias", "encoder.blocks.0.attn.proj.weight", "encoder.blocks.0.attn.proj.bias", "encoder.blocks.0.mlp.fc1.weight", "encoder.blocks.0.mlp.fc1.bias", "encoder.blocks.0.mlp.fc2.weight", "encoder.blocks.0.mlp.fc2.bias", "encoder.blocks.1.norm1.weight", "encoder.blocks.1.norm1.bias", "encoder.blocks.1.norm2.weight", "encoder.blocks.1.norm2.bias", "encoder.blocks.1.attn.qkv.weight", "encoder.blocks.1.attn.qkv.bias", "encoder.blocks.1.attn.proj.weight", "encoder.blocks.1.attn.proj.bias", "encoder.blocks.1.mlp.fc1.weight", "encoder.blocks.1.mlp.fc1.bias", "encoder.blocks.1.mlp.fc2.weight", "encoder.blocks.1.mlp.fc2.bias", "encoder.blocks.2.norm1.weight", "encoder.blocks.2.norm1.bias", "encoder.blocks.2.norm2.weight", "encoder.blocks.2.norm2.bias", "encoder.blocks.2.attn.qkv.weight", "encoder.blocks.2.attn.qkv.bias", "encoder.blocks.2.attn.proj.weight", "encoder.blocks.2.attn.proj.bias", "encoder.blocks.2.mlp.fc1.weight", "encoder.blocks.2.mlp.fc1.bias", "encoder.blocks.2.mlp.fc2.weight", "encoder.blocks.2.mlp.fc2.bias", "encoder.blocks.3.norm1.weight", "encoder.blocks.3.norm1.bias", "encoder.blocks.3.norm2.weight", "encoder.blocks.3.norm2.bias", "encoder.blocks.3.attn.qkv.weight", "encoder.blocks.3.attn.qkv.bias", "encoder.blocks.3.attn.proj.weight", "encoder.blocks.3.attn.proj.bias", "encoder.blocks.3.mlp.fc1.weight", "encoder.blocks.3.mlp.fc1.bias", "encoder.blocks.3.mlp.fc2.weight", "encoder.blocks.3.mlp.fc2.bias", "encoder.blocks.4.norm1.weight", "encoder.blocks.4.norm1.bias", "encoder.blocks.4.norm2.weight", "encoder.blocks.4.norm2.bias", "encoder.blocks.4.attn.qkv.weight", "encoder.blocks.4.attn.qkv.bias", "encoder.blocks.4.attn.proj.weight", "encoder.blocks.4.attn.proj.bias", "encoder.blocks.4.mlp.fc1.weight", "encoder.blocks.4.mlp.fc1.bias", "encoder.blocks.4.mlp.fc2.weight", "encoder.blocks.4.mlp.fc2.bias", "encoder.blocks.5.norm1.weight", "encoder.blocks.5.norm1.bias", "encoder.blocks.5.norm2.weight", "encoder.blocks.5.norm2.bias", "encoder.blocks.5.attn.qkv.weight", "encoder.blocks.5.attn.qkv.bias", "encoder.blocks.5.attn.proj.weight", "encoder.blocks.5.attn.proj.bias", "encoder.blocks.5.mlp.fc1.weight", "encoder.blocks.5.mlp.fc1.bias", "encoder.blocks.5.mlp.fc2.weight", "encoder.blocks.5.mlp.fc2.bias", "encoder.blocks.6.norm1.weight", "encoder.blocks.6.norm1.bias", "encoder.blocks.6.norm2.weight", "encoder.blocks.6.norm2.bias", "encoder.blocks.6.attn.qkv.weight", "encoder.blocks.6.attn.qkv.bias", "encoder.blocks.6.attn.proj.weight", "encoder.blocks.6.attn.proj.bias", "encoder.blocks.6.mlp.fc1.weight", "encoder.blocks.6.mlp.fc1.bias", "encoder.blocks.6.mlp.fc2.weight", "encoder.blocks.6.mlp.fc2.bias", "encoder.blocks.7.norm1.weight", "encoder.blocks.7.norm1.bias", "encoder.blocks.7.norm2.weight", "encoder.blocks.7.norm2.bias", "encoder.blocks.7.attn.qkv.weight", "encoder.blocks.7.attn.qkv.bias", "encoder.blocks.7.attn.proj.weight", "encoder.blocks.7.attn.proj.bias", "encoder.blocks.7.mlp.fc1.weight", "encoder.blocks.7.mlp.fc1.bias", "encoder.blocks.7.mlp.fc2.weight", "encoder.blocks.7.mlp.fc2.bias", "encoder.blocks.8.norm1.weight", "encoder.blocks.8.norm1.bias", "encoder.blocks.8.norm2.weight", "encoder.blocks.8.norm2.bias", 
"encoder.blocks.8.attn.qkv.weight", "encoder.blocks.8.attn.qkv.bias", "encoder.blocks.8.attn.proj.weight", "encoder.blocks.8.attn.proj.bias", "encoder.blocks.8.mlp.fc1.weight", "encoder.blocks.8.mlp.fc1.bias", "encoder.blocks.8.mlp.fc2.weight", "encoder.blocks.8.mlp.fc2.bias", "encoder.blocks.9.norm1.weight", "encoder.blocks.9.norm1.bias", "encoder.blocks.9.norm2.weight", "encoder.blocks.9.norm2.bias", "encoder.blocks.9.attn.qkv.weight", "encoder.blocks.9.attn.qkv.bias", "encoder.blocks.9.attn.proj.weight", "encoder.blocks.9.attn.proj.bias", "encoder.blocks.9.mlp.fc1.weight", "encoder.blocks.9.mlp.fc1.bias", "encoder.blocks.9.mlp.fc2.weight", "encoder.blocks.9.mlp.fc2.bias", "encoder.blocks.10.norm1.weight", "encoder.blocks.10.norm1.bias", "encoder.blocks.10.norm2.weight", "encoder.blocks.10.norm2.bias", "encoder.blocks.10.attn.qkv.weight", "encoder.blocks.10.attn.qkv.bias", "encoder.blocks.10.attn.proj.weight", "encoder.blocks.10.attn.proj.bias", "encoder.blocks.10.mlp.fc1.weight", "encoder.blocks.10.mlp.fc1.bias", "encoder.blocks.10.mlp.fc2.weight", "encoder.blocks.10.mlp.fc2.bias", "encoder.blocks.11.norm1.weight", "encoder.blocks.11.norm1.bias", "encoder.blocks.11.norm2.weight", "encoder.blocks.11.norm2.bias", "encoder.blocks.11.attn.qkv.weight", "encoder.blocks.11.attn.qkv.bias", "encoder.blocks.11.attn.proj.weight", "encoder.blocks.11.attn.proj.bias", "encoder.blocks.11.mlp.fc1.weight", "encoder.blocks.11.mlp.fc1.bias", "encoder.blocks.11.mlp.fc2.weight", "encoder.blocks.11.mlp.fc2.bias", "encoder.norm.weight", "encoder.norm.bias", "encoder.head.weight", "encoder.head.bias", "decoder.cls_emb", "decoder.proj_patch", "decoder.proj_classes", "decoder.blocks.0.norm1.weight", "decoder.blocks.0.norm1.bias", "decoder.blocks.0.norm2.weight", "decoder.blocks.0.norm2.bias", "decoder.blocks.0.attn.qkv.weight", "decoder.blocks.0.attn.qkv.bias", "decoder.blocks.0.attn.proj.weight", "decoder.blocks.0.attn.proj.bias", "decoder.blocks.0.mlp.fc1.weight", "decoder.blocks.0.mlp.fc1.bias", "decoder.blocks.0.mlp.fc2.weight", "decoder.blocks.0.mlp.fc2.bias", "decoder.blocks.1.norm1.weight", "decoder.blocks.1.norm1.bias", "decoder.blocks.1.norm2.weight", "decoder.blocks.1.norm2.bias", "decoder.blocks.1.attn.qkv.weight", "decoder.blocks.1.attn.qkv.bias", "decoder.blocks.1.attn.proj.weight", "decoder.blocks.1.attn.proj.bias", "decoder.blocks.1.mlp.fc1.weight", "decoder.blocks.1.mlp.fc1.bias", "decoder.blocks.1.mlp.fc2.weight", "decoder.blocks.1.mlp.fc2.bias", "decoder.proj_dec.weight", "decoder.proj_dec.bias", "decoder.decoder_norm.weight", "decoder.decoder_norm.bias", "decoder.mask_norm.weight", "decoder.mask_norm.bias".
