
downstream tasks in norm_layer for EfficientFormerV2 #47

Open
deepkyu opened this issue Feb 6, 2023 · 0 comments
deepkyu commented Feb 6, 2023

Hi,

First of all, congratulations on this awesome research 🎉

I have a simple question while reading your EfficientFormerV2 codes.

In your backbone codes for detection and segmentation, I found that norm_layers are not applied in forward_token:

if self.fork_feat and idx in self.out_indices:
    # norm_layer = getattr(self, f'norm{idx}')
    # x_out = norm_layer(x)
    outs.append(x)

However, your backbone for classification forwards through the norm_layer:

if self.fork_feat and idx in self.out_indices:
    norm_layer = getattr(self, f'norm{idx}')
    x_out = norm_layer(x)
    outs.append(x_out)

Notably, this difference between classification and the other tasks does not appear in your original EfficientFormer code.
I tried my best to find an explanation in both your code and the paper, but I couldn't find one.

So, may I kindly ask you to explain why this should be different?

Thank you in advance.

+)
To clarify my question, I added the corresponding code lines from EfficientFormer used in segmentation:

if self.fork_feat and idx in self.out_indices:
    norm_layer = getattr(self, f'norm{idx}')
    if len(x.size()) != 4:
        x = x.transpose(1, 2).reshape(B, C, H, W)
    x_out = norm_layer(x)
    outs.append(x_out)

These seem to be the outputs from each norm layer.
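For clarity, here is a minimal, self-contained sketch of the two variants side by side. The names `fork_feat`, `out_indices`, and `norm{idx}` follow the snippets above, but the blocks and norm layers here are placeholders (plain Conv2d + BatchNorm2d), not the actual EfficientFormerV2 modules:

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Toy backbone showing the two feature-collection variants in question."""

    def __init__(self, apply_norm: bool):
        super().__init__()
        self.fork_feat = True
        self.out_indices = [0, 1]
        self.apply_norm = apply_norm
        # Placeholder stages; the real model uses EfficientFormerV2 blocks.
        self.blocks = nn.ModuleList(
            [nn.Conv2d(8, 8, kernel_size=3, padding=1) for _ in range(2)]
        )
        # Register one norm layer per output index, as in the snippets above.
        for idx in self.out_indices:
            self.add_module(f'norm{idx}', nn.BatchNorm2d(8))

    def forward(self, x):
        outs = []
        for idx, blk in enumerate(self.blocks):
            x = blk(x)
            if self.fork_feat and idx in self.out_indices:
                if self.apply_norm:
                    # Classification-style: normalize before collecting.
                    norm_layer = getattr(self, f'norm{idx}')
                    outs.append(norm_layer(x))
                else:
                    # Detection/segmentation-style: collect raw features.
                    outs.append(x)
        return outs

x = torch.randn(1, 8, 16, 16)
raw = TinyBackbone(apply_norm=False).eval()(x)
normed = TinyBackbone(apply_norm=True).eval()(x)
# Both variants return one feature map per output index, with identical shapes;
# only the normalization of the collected features differs.
print(len(raw), len(normed))
```

This is only meant to isolate the difference being asked about; whether the downstream heads expect normalized or raw stage outputs is exactly the open question.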
