
Where to concatenate content features of nodes? Should it be before the attention network? #26

Open
kftam1994 opened this issue Apr 16, 2024 · 1 comment

Comments

@kftam1994

If nodes have content features, such as product descriptions and categorical information, where should this frozen embedding (e.g. a sentence-embedding matrix of size number of nodes × embedding dimension) be concatenated in the model?

My understanding is to concatenate it to the node embedding before the attention network, similar to the "Reversed Position Embedding", so the input becomes the node's representation from the graph + the Reversed Position Embedding + the frozen content-feature embedding. Is this understanding correct?
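For concreteness, here is a minimal sketch of this idea (assuming PyTorch; the tensor names, dimensions, and projection layer are illustrative assumptions, not taken from the repository):

```python
import torch
import torch.nn as nn

# Illustrative shapes only; none of these names come from the repository.
batch_size, seq_len = 32, 10
d_model, d_content = 100, 384

hidden = torch.randn(batch_size, seq_len, d_model)         # node representations from the graph encoder
pos_emb = torch.randn(batch_size, seq_len, d_model)        # reversed position embedding
content_emb = torch.randn(batch_size, seq_len, d_content)  # frozen content features (e.g. sentence embeddings)

# Concatenate all three and project back to the model dimension
# before feeding the result into the attention / readout network.
proj = nn.Linear(2 * d_model + d_content, d_model)
fused = torch.tanh(proj(torch.cat([hidden, pos_emb, content_emb], dim=-1)))
# `fused` would then take the place of the plain (node + position) input of the attention network.
```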

Thank you

@CCIIPLab
Owner

Thank you for your attention to our work. We believe the implementation you propose is reasonable. Another option is to concatenate the content features with the node embeddings before inputting them into the graph neural network.
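A minimal sketch of this alternative (again assuming PyTorch; the names, dimensions, and fusion layer are illustrative assumptions, not the repository's implementation):

```python
import torch
import torch.nn as nn

# Hypothetical setup: learnable item-ID embeddings plus frozen, precomputed content features.
num_items, d_id, d_content, d_model = 50_000, 100, 384, 100

id_embedding = nn.Embedding(num_items, d_id)
content_features = torch.randn(num_items, d_content)  # e.g. precomputed sentence embeddings
content_features.requires_grad_(False)                # keep the content features frozen

fuse = nn.Linear(d_id + d_content, d_model)           # illustrative fusion layer

def initial_node_features(item_ids: torch.Tensor) -> torch.Tensor:
    """Build the initial node features fed into the graph neural network."""
    id_vec = id_embedding(item_ids)                    # (num_nodes, d_id), trainable
    content_vec = content_features[item_ids]           # (num_nodes, d_content), frozen
    return fuse(torch.cat([id_vec, content_vec], dim=-1))

# Example: initial features for a small batch of item IDs, shape (3, d_model)
node_feats = initial_node_features(torch.tensor([1, 5, 42]))
```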
