Cannot Use Pretrain Weights for VGG16 #50
Hi @omid-reza, since you've closed this I guess you figured it out — this is expected behavior: the VGG weights cover only the backbone, so the RPN and ROI heads have to be initialized with random weights.
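As a minimal illustration of that behavior (plain PyTorch, not this repo's actual checkpoint loader — `TinyDetector` and its layers are purely hypothetical stand-ins), loading a backbone-only checkpoint with `strict=False` simply reports the head parameters as missing and leaves them at their random initialization:

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy model: 'backbone' stands in for VGG16, 'roi_head' for the ROI head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(4, 4)
        self.roi_head = nn.Linear(4, 2)

model = TinyDetector()

# A checkpoint that covers only the backbone parameters:
ckpt = {"backbone.weight": torch.zeros(4, 4), "backbone.bias": torch.zeros(4)}

# strict=False loads the matching keys and reports the rest as missing,
# so the ROI-head parameters keep their random initialization.
result = model.load_state_dict(ckpt, strict=False)
print(result.missing_keys)  # ['roi_head.weight', 'roi_head.bias']
```

Detectron2's checkpointer does the same kind of partial load, which is why the run proceeds with a warning instead of failing.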
Thanks, I figured out that issue. However, when I want to use a pre-trained model that was trained on Cityscapes (ViTDet-B) to train my own baseline model, I get the following error:

```
size mismatch for roi_heads.box_predictor.cls_score.weight: copying a param with shape torch.Size([2, 1024]) from checkpoint, the shape in current model is torch.Size([11, 1024]).
size mismatch for roi_heads.box_predictor.cls_score.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for roi_heads.box_predictor.bbox_pred.weight: copying a param with shape torch.Size([4, 1024]) from checkpoint, the shape in current model is torch.Size([40, 1024]).
size mismatch for roi_heads.box_predictor.bbox_pred.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([40]).
```

My config file is:

```yaml
_BASE_: "../Base-RCNN-VitDetB-cityscapes.yaml"
MODEL:
  ROI_HEADS:
    NUM_CLASSES: 10
DATASETS:
  TRAIN: ("eds_dom1_train",)
  TEST: ("eds_dom1_test", "eds_dom2_test",)
  BATCH_CONTENTS: ("labeled_strong", )
EMA:
  ENABLED: True
SOLVER:
  STEPS: (11999,)
  MAX_ITER: 12000
  CHECKPOINT_PERIOD: 3000
TEST:
  EVAL_PERIOD: 1250
OUTPUT_DIR: "output/eds1t2cityspaces/eds1t2_vitdetb_baseline_strongaug_ema/"
```

Base-RCNN-VitDetB-cityscapes.yaml:

```yaml
_BASE_: "./Base-RCNN-FPN.yaml"
MODEL:
  BACKBONE:
    NAME: "build_vitdet_b_backbone"
  WEIGHTS: "output/cityscapes/cityscapes_vitdetb_baseline_strongaug_ema/cityscapes_foggy_val_model_best.pth"
  ## See detectron2/configs/common/models/mask_rcnn_vitdet.py
  ROI_BOX_HEAD:
    NORM: "LN"
    CONV_DIM: 256
    NUM_CONV: 4
    FC_DIM: 1024
    NUM_FC: 1
  RPN:
    CONV_DIMS: [-1, -1]
  PIXEL_MEAN: [123.675, 116.28, 103.53]
  PIXEL_STD: [58.395, 57.12, 57.375]
INPUT:
  FORMAT: "RGB"
SOLVER:
  IMS_PER_BATCH: 96
  IMS_PER_GPU: 1
  STEPS: (3200,)
  MAX_ITER: 4000
  OPTIMIZER: "ADAMW"
  BASE_LR: 0.0002
VIT:
  USE_ACT_CHECKPOINT: True
```

The only thing that I changed in the base config file is the value for WEIGHTS.
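The shapes in the error follow `NUM_CLASSES` directly: `cls_score` has `NUM_CLASSES + 1` rows (the extra one is the background class) and `bbox_pred` has `4 * NUM_CLASSES` rows, so a one-class checkpoint (2 / 4) cannot be loaded into a ten-class model (11 / 40). A common workaround, sketched below (this is not code from this repo, just an illustration), is to drop the class-count-dependent predictor parameters from the checkpoint so that only `cls_score` / `bbox_pred` are re-initialized while the backbone and RPN weights still load:

```python
# Parameters whose shapes depend on NUM_CLASSES and therefore cannot be
# transferred between models with different numbers of classes.
CLASS_DEPENDENT_PREFIXES = (
    "roi_heads.box_predictor.cls_score",
    "roi_heads.box_predictor.bbox_pred",
)

def strip_class_dependent_heads(state_dict):
    """Return a copy of the checkpoint without the class-count-dependent
    predictor parameters, so the remaining weights can be loaded into a
    model configured with a different NUM_CLASSES."""
    return {
        key: value
        for key, value in state_dict.items()
        if not key.startswith(CLASS_DEPENDENT_PREFIXES)
    }
```

The filtered dict can then be loaded with `model.load_state_dict(filtered, strict=False)`; depending on the Detectron2 version, the built-in checkpointer may also skip shape-mismatched parameters with a warning rather than raise.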
I figured out that when I use your provided final model it works (with the warning we discussed before), but when I train the model on my own, it raises an error.
Can you give some more details about the error you encountered?
Hello,
First of all, I want to thank you for your quick support.
I have used the VGG16 config that is in the extra branch. When I want to use a pretrained model, e.g.

```yaml
WEIGHTS: "models/model_final_84107b.pkl"
```

I get the below warning and it does not use the weights. My base config file is the same `detectron2/Base-RCNN-DilatedC5.yaml` that we have in the extra branch. Thanks.