Hi!
I have a flexible model that I want to fine-tune starting from different layers, so I added the following code to its forward function:

```python
with torch.no_grad():
    for net in self.layers[:train_start_index]:
        x = net(x)
for net in self.layers[train_start_index:output_index + 1]:
    x = net(x)
```

Even though the model works as intended, torchinfo's summary function always reports the same number of trainable parameters. Is there something I missed in my use of torchinfo, or is this just not currently supported?
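For reference, here is a minimal, self-contained sketch of the setup described above. The layer sizes, the use of simple `nn.Linear` blocks, and the `FlexibleNet` name are illustrative assumptions rather than my actual model; the point is only to show the partial-freezing pattern and the torchinfo call.

```python
import torch
import torch.nn as nn
from torchinfo import summary

class FlexibleNet(nn.Module):
    def __init__(self, train_start_index=2, output_index=3):
        super().__init__()
        # Four identical blocks so the partial-training indices are easy to follow.
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(4)
        )
        self.train_start_index = train_start_index
        self.output_index = output_index

    def forward(self, x):
        # Frozen part: run the early layers without building an autograd graph.
        with torch.no_grad():
            for net in self.layers[: self.train_start_index]:
                x = net(x)
        # Trainable part: only these layers should be fine-tuned.
        for net in self.layers[self.train_start_index : self.output_index + 1]:
            x = net(x)
        return x

model = FlexibleNet()
# torchinfo still reports every parameter as trainable here, since it inspects
# each parameter's requires_grad flag rather than the no_grad() context that is
# only entered inside forward().
print(summary(model, input_size=(1, 16)))
```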
Thanks for your help!