
Is it possible to finetune the model while freezing some layers? #121

Answered by davide97l
davide97l asked this question in Q&A


Solved by modifying train.py in the following way:

import hydra
import torch
import lightning as L
from hydra.utils import instantiate
from omegaconf import DictConfig


@hydra.main(version_base="1.3", config_name="default.yaml")
def main(cfg: DictConfig):
    if cfg.tf32:
        assert cfg.trainer.precision == 32
        torch.backends.cuda.matmul.allow_tf32 = True
        torch.backends.cudnn.allow_tf32 = True

    model: L.LightningModule = instantiate(cfg.model, _convert_="all")

    # Uncomment to inspect the submodule names before choosing the keywords below.
    # for name, module in model.named_children():
    #     print(f"Module name: {name}")
    #     print(module)

    # Parameters whose names contain one of these keywords stay trainable;
    # everything else is frozen.
    learnable_layers = []
    frozen_layers = []
    learnable_layers_keywords = ["param_proj", "in_proj"]
    for name, param in model.named_parameters():
        if any(k in name for k in learnable_layers_keywords):
            param.requires_grad = True
            learnable_layers.append(name)
        else:
            param.requires_grad = False
            frozen_layers.append(name)
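
A quick sanity check can go right after the loop to confirm the split actually took effect (a minimal sketch; it only uses the model, learnable_layers, and frozen_layers defined above):

    # Report how many parameters will actually receive gradients after freezing.
    n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in model.parameters())
    print(f"Trainable parameters: {n_trainable} / {n_total}")
    print(f"Learnable layers: {learnable_layers}")
    print(f"Frozen layers: {frozen_layers}")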

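If the LightningModule builds its optimizer from self.parameters(), it can also be worth handing it only the parameters that are still trainable. A minimal sketch under that assumption; the Adam optimizer and the learning rate are placeholders, not taken from the repo:

    # Inside the LightningModule (torch assumed imported there): hand the
    # optimizer only the parameters left with requires_grad=True.
    def configure_optimizers(self):
        trainable = (p for p in self.parameters() if p.requires_grad)
        return torch.optim.Adam(trainable, lr=1e-4)  # placeholder optimizer and lr
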
Answer selected by davide97l