How to understand the NN part, e.g., qnet.cpp #13
Hi, in general this code builds the symbolic computation graph first, and then runs the forward/backward passes. This is similar to TensorFlow's logic. Regarding the two functions you are asking about: let me know if you have further questions.
Hi, thanks for sharing the code!
Hi, there was a (possibly outdated) document: https://www.cc.gatech.edu/~hdai8/graphnn/html/annotated.html Basically, you add new operators to the computation graph through the 'af' function, like af(computation_graph, {list of inputs}, other arguments). But since I'm no longer maintaining the code base, I would suggest using PyTorch or TensorFlow for developing new models.
Thanks for your answer! Sorry, I have another question. In your paper, you explained that we can use two separate networks: the first for embedding, which iterates (for example) 4 times, and the second for finding the Q value function. But in this code I can't see that structure; there is only one net. Did you merge them? How? Thanks.
They are trained jointly. For example, in MVC, everything up to this line embeds the graph, and after that we feed the embedding into another MLP to calculate the Q function.
I find it is not easy to understand how the NN part works, e.g., the functions 'QNet::SetupGraphInput()' and 'QNet::BuildNet()'. Could you give more detailed instructions about the code? Thanks a lot!