
metaformer has no positional encoding? #14

Open
bio-mlhui opened this issue Jun 15, 2024 · 1 comment

Comments

@bio-mlhui

I notice that MetaFormer has no positional encoding (PE), either in the attention layers or at the model input. Does this affect performance? Is positional encoding not necessary? What if MetaFormer were equipped with a 2D sin-cos or learned PE?

@yuweihao
Collaborator

@bio-mlhui, thanks for your attention.

For ConvFormer, a pure CNN model, positional encoding is not necessary.

For CAFormer, its first two stages are convolutional, so each patch already "knows" which patches are nearby. I remember that adding positional encoding after the first two stages and before the third stage did not influence the performance on ImageNet. For simplicity, I did not add positional encoding in my implementation.
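For reference, a minimal sketch of the kind of 2D sin-cos positional encoding mentioned above. This is not part of the released code; it assumes channels-last feature maps of shape (B, H, W, C) and an arbitrary stage width, and simply adds the embedding to the features entering the first attention stage.

```python
import torch

def sincos_pos_embed_2d(h, w, dim, temperature=10000.0):
    # Build a 2D sin-cos positional embedding of shape (h, w, dim).
    # Half of the channels encode the x coordinate, the other half the y coordinate.
    assert dim % 4 == 0, "dim must be divisible by 4 for 2D sin-cos embedding"
    y, x = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    omega = torch.arange(dim // 4) / (dim // 4 - 1)
    omega = 1.0 / (temperature ** omega)
    y = y.flatten()[:, None] * omega[None, :]   # (h*w, dim/4)
    x = x.flatten()[:, None] * omega[None, :]   # (h*w, dim/4)
    pe = torch.cat([x.sin(), x.cos(), y.sin(), y.cos()], dim=1)  # (h*w, dim)
    return pe.reshape(h, w, dim)

# Hypothetical usage: add the embedding to the feature map that enters the
# third (first attention) stage. Resolution and width here are placeholders.
feat = torch.randn(2, 14, 14, 320)              # (B, H, W, C), channels-last
feat = feat + sincos_pos_embed_2d(14, 14, 320)  # broadcast over the batch dimension
```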
