I noticed that `SpecialSpmmFunction` is a subclass of `torch.autograd.Function`, and that only one such object is created in `SpGraphAttentionLayer`:
```python
class SpGraphAttentionLayer(nn.Module):
    def __init__(self, in_features, out_features, dropout, alpha, concat=True):
        super(SpGraphAttentionLayer, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.alpha = alpha
        self.concat = concat
```
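For reference, `self.special_spmm` is presumably not the `Function` class itself but a small `nn.Module` wrapper instantiated once in `__init__`. A minimal sketch of that pattern (the backward here is an approximation for illustration, not necessarily the repo's exact code):

```python
import torch
import torch.nn as nn

class SpecialSpmmFunction(torch.autograd.Function):
    # Sparse @ dense matmul whose backward routes gradients only to
    # `values` and `b` (sketch; the repo's real backward may differ).
    @staticmethod
    def forward(ctx, indices, values, shape, b):
        a = torch.sparse_coo_tensor(indices, values, shape)
        ctx.save_for_backward(indices, values, b)
        ctx.shape = shape
        return torch.matmul(a, b)

    @staticmethod
    def backward(ctx, grad_output):
        indices, values, b = ctx.saved_tensors
        grad_values = grad_b = None
        if ctx.needs_input_grad[1]:
            # dL/d values[k] = grad_output[row_k] . b[col_k]
            grad_values = (grad_output[indices[0]] * b[indices[1]]).sum(dim=1)
        if ctx.needs_input_grad[3]:
            # grad_b = a^T @ grad_output, rebuilt from the saved triplets
            a_t = torch.sparse_coo_tensor(indices.flip(0), values,
                                          (ctx.shape[1], ctx.shape[0]))
            grad_b = torch.matmul(a_t, grad_output)
        return None, grad_values, None, grad_b

class SpecialSpmm(nn.Module):
    def forward(self, indices, values, shape, b):
        # .apply builds a brand-new function object (and ctx) per call
        return SpecialSpmmFunction.apply(indices, values, shape, b)
```

The important line is the last one: `Function.apply` constructs a fresh function object (with its own `ctx`) on every call, so reusing the single `SpecialSpmm` module within a forward pass does not reuse any function object.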
But the official documentation says that "Each function object is meant to be used only once (in the forward pass)," yet I found `self.special_spmm` invoked twice in `forward`:
```python
e_rowsum = self.special_spmm(edge, edge_e, torch.Size([N, N]),
                             torch.ones(size=(N, 1), device=dv))
# e_rowsum: N x 1

edge_e = self.dropout(edge_e)
# edge_e: E

# "Each function object is meant to be used only once (in the forward pass)."
h_prime = self.special_spmm(edge, edge_e, torch.Size([N, N]), h)
```
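As a point of comparison, here is a minimal self-contained sketch (the `Double` class is made up purely for illustration): applying the same `Function` subclass twice in one forward pass creates two separate function objects, and backward still gives the expected gradient:

```python
import torch

class Double(torch.autograd.Function):
    # toy function: y = 2 * x
    @staticmethod
    def forward(ctx, x):
        return 2 * x

    @staticmethod
    def backward(ctx, grad_output):
        return 2 * grad_output

x = torch.ones(3, requires_grad=True)
# two calls -> two independent function objects in the autograd graph
y = Double.apply(x) + Double.apply(x)
y.sum().backward()
print(x.grad)  # tensor([4., 4., 4.])
```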
Have I misunderstood something?