In the paper and code you used a set of 1x1 2D convolution layers to process the latent vector z in the discriminator. What was the motivation behind using 2D Convolutions versus fully connected layers or some other kind of convolutional layer? What other architectures did you try, and did you find success with any of those?
Given that z is a vector, it can be interpreted as a 1x1xN stack of feature maps. In that case, a 1x1 convolution and a fully-connected layer are equivalent. I don't see how one could use another type of convolutional layer, though.
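The equivalence is easy to check numerically: a 1x1 convolution applied to a 1x1xN stack of feature maps performs exactly one weighted sum per output channel, which is the same computation as a fully-connected layer whose weight matrix is the conv kernel with the two singleton spatial dimensions squeezed out. A minimal numpy sketch (sizes N and M are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 4  # input / output channel counts (illustrative)

z = rng.standard_normal(N)             # latent vector z, viewed as a 1x1xN input
W = rng.standard_normal((M, N, 1, 1))  # 1x1 conv kernel: (out_ch, in_ch, 1, 1)
b = rng.standard_normal(M)

# 1x1 convolution over the single spatial location: each output channel
# is a weighted sum over the N input channels, plus a bias.
conv_out = np.array([(W[m, :, 0, 0] * z).sum() + b[m] for m in range(M)])

# Fully-connected layer using the same weights, reshaped to (M, N).
fc_out = W[:, :, 0, 0] @ z + b

assert np.allclose(conv_out, fc_out)  # identical up to floating-point error
```

Because the spatial extent is 1x1, there is nothing for a larger kernel (e.g. 3x3) to slide over, which is why other convolutional layer types don't apply here.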