About dataset format and running the code #4

Open
Abhinav43 opened this issue Apr 3, 2019 · 1 comment
Abhinav43 commented Apr 3, 2019

Hello Ma,

I am trying to run your code, but it requires the following inputs:

  • allx
  • ally
  • graph
  • adjmat
  • trainMask
  • valMask
  • testMask

I checked the preprocessing files and https://github.com/matenure/FastGCN/tree/master/data
for the data format, but I couldn't find enough information.

Since most graph convolutional networks are designed for node prediction, I think their data formats are different?

Could you provide a simple preprocessing script with a few artificial data points? That would help me understand the expected shape of each placeholder.
When I try to run this code on the Cora dataset, I get this error:

ValueError: shapes (1,2708) and (1,1) not aligned: 2708 (dim 1) != 1 (dim 0)
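
For reference, the error itself looks like a plain matrix-product shape mismatch; here is a minimal NumPy reproduction (the operand shapes are just my guess at what the model multiplies internally):

import numpy as np

a = np.ones((1, 2708))  # e.g. a reshaped attention weight row
b = np.ones((1, 1))     # e.g. a degenerate adjacency entry
np.dot(a, b)            # ValueError: shapes (1,2708) and (1,1) not aligned: 2708 (dim 1) != 1 (dim 0)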

My data shapes look like this:

  • print(np.array(adjs).shape) >> (2709,)
  • print(features.shape) >> (1708, 1433)
  • print(x.shape) >> (1708, 1433)
  • print(y.shape) >> (1708, 7)
  • print(train_mask.shape) >> (2708,)
  • print(test_mask.shape) >> (2708,)
  • print(val_mask.shape) >> (2708,)

Can you share the correct shapes for each of these inputs?

Second, I was going through the paper (https://arxiv.org/pdf/1804.10850.pdf), which says:

Assume we have an adjacency matrix $A^u$ for view $u$; we assign attention weights $g^u \in \mathbb{R}^{N \times N}$ to the graph edges, such that the integrated adjacency matrix becomes $\sum_u g^u \odot A^u$, where $\odot$ is the element-wise multiplication.

But in the implementation you are not using element-wise multiplication, and I am also not clear about the concatenation along axis 0. If I understand correctly, the final mixedADJ will have the same shape as the original adj?

def attention(self):
        # One attention weight vector per supported view (Xavier-initialized).
        self.attweights = tf.get_variable("attWeights", [self.num_support, self.output_dim],
                                          initializer=tf.contrib.layers.xavier_initializer())
        #self.attbiases = tf.get_variable("attBiases",[self.num_support, self.output_dim],initializer=tf.contrib.layers.xavier_initializer())
        attention = []
        self.attADJ = []
        for i in range(self.num_support):
            #tmpattention = tf.matmul(tf.reshape(self.attweights[i],[1,-1]), self.adjs[i])+tf.reshape(self.attbiases[i],[1,-1])
            # Project the weight vector through the view's adjacency matrix.
            tmpattention = tf.matmul(tf.reshape(self.attweights[i], [1, -1]), self.adjs[i])
            #tmpattention = tf.reshape(self.attweights[i],[1,-1]) #test the performance of non-attentive vector weights
            attention.append(tmpattention)
        print("attention_size", len(attention))  # a Python list has no .size
        attentions = tf.concat(0, attention)  # pre-1.0 tf.concat(axis, values) argument order
        # Normalize the attention scores across views.
        self.attention = tf.nn.softmax(attentions, 0)
        for i in range(self.num_support):
            # Scale each row of adjs[i] by its (diagonalized) attention score.
            self.attADJ.append(tf.matmul(tf.diag(self.attention[i]), self.adjs[i]))

        self.mixedADJ = tf.add_n(self.attADJ)
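
For contrast, here is a minimal sketch of what I understand the paper's element-wise formulation to be (this is only my reading, not the repo's code; it assumes dense adjs and an attribute self.N for the number of nodes):

def attention_elementwise(self):
        # Paper's formulation as I read it: mixedADJ = sum_u softmax(g_u) * A_u,
        # with g_u a full N x N weight matrix and * the element-wise product.
        gu = tf.get_variable("attWeightsFull", [self.num_support, self.N, self.N],
                             initializer=tf.contrib.layers.xavier_initializer())
        gates = tf.nn.softmax(gu, 0)  # normalize the gates across views
        weighted = [gates[i] * self.adjs[i] for i in range(self.num_support)]
        self.mixedADJ = tf.add_n(weighted)  # same (N, N) shape as each adjs[i]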

Keep writing and keep sharing good work.
Looking forward to your reply. Thank you :)

matenure (Owner) commented Apr 8, 2019

Assume we have 222 drugs. Here are the shapes for each variable:
adjs: a list with 4 elements, each adjs[i] with shape: (222, 222)
features/x: (222, 582)
y: (222, 222)
train_mask: (222, 222)
val_mask: (222, 222)
test_mask: (222, 222)

Yes, the final mixedADJ will have the same shape as the original adj.

I can upload the masks and adjs in this repo for reference.
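
In the meantime, here is a minimal sketch that builds random synthetic inputs with exactly these shapes (assuming NumPy; the names follow the variables in this thread, and the split boundaries are a toy example, not the real split):

import numpy as np

N, F, V = 222, 582, 4   # nodes (drugs), feature dimension, number of views

adjs = [np.random.rand(N, N) for _ in range(V)]   # one adjacency matrix per view
features = np.random.rand(N, F)                   # also used as x
y = np.random.randint(0, 2, size=(N, N))          # pairwise (drug-drug) labels
train_mask = np.zeros((N, N), dtype=bool)
val_mask = np.zeros((N, N), dtype=bool)
test_mask = np.zeros((N, N), dtype=bool)
train_mask[:150, :150] = True                     # toy split, not the real one
val_mask[150:180, 150:180] = True
test_mask[180:, 180:] = True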
