I noticed that in your implementation, you concatenate the node embeddings before classification (in the logistic layer) rather than using pooling. I was wondering if you could elaborate on that choice.
PS: I'm referring to the code here (see the comment I added):
def add_logistic_layer(self):
    logistic_layers = []
    if self.attention_head > 0:
        logistic_in_dim = [self.attention_head * self.dims[-1]]
    else:
        logistic_in_dim = [self.adjs[-1].shape[0] * self.dims[-1]]  # THIS MEANS YOU'RE CONCATENATING
    for d in logistic_in_dim:
        layer = nn.Linear(d, self.out_dim)
        logistic_layers.append(layer)
    self.my_logistic_layers = nn.ModuleList(logistic_layers)
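For context, here is a minimal sketch of the two readout styles being contrasted (the shapes `N`, `D`, `C` and the tensors are hypothetical, not from the repo). Concatenation flattens all node embeddings into one `N * D` vector, so the classifier's input size is tied to the number of nodes (here, `self.adjs[-1].shape[0]`); mean pooling averages over nodes, so the input size stays `D` regardless of graph size:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: N nodes, D-dim embeddings, C output classes.
N, D, C = 5, 8, 3
node_embeddings = torch.randn(N, D)

# Concatenation readout (as in the snippet above): classifier input
# dimension is N * D, so it depends on the graph's node count.
concat_head = nn.Linear(N * D, C)
logits_concat = concat_head(node_embeddings.reshape(1, N * D))

# Mean-pooling readout: average node embeddings first, so the
# classifier input dimension is D, independent of N.
pool_head = nn.Linear(D, C)
logits_pool = pool_head(node_embeddings.mean(dim=0, keepdim=True))

print(logits_concat.shape)  # torch.Size([1, 3])
print(logits_pool.shape)    # torch.Size([1, 3])
```

One practical consequence of the concatenation choice is that the logistic layer is only defined for graphs with a fixed, known number of nodes, whereas a pooled readout would transfer across graph sizes.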