What the code (in ops.py) currently does is:

- apply the same transposed convolution (TC) to the input x twice, resulting in x1 and x2 with x1 == x2. In code, x1 corresponds to deconv and x2 to g.
- add a different learned bias to each of the two TC outputs: x1 += b1, x2 += b2
- apply leaky ReLU: x1 = lrelu(x1, 0.2)
- overwrite x2: g = tf.nn.sigmoid(deconv)
- compute x1 = x1 * x2, which is equivalent to lrelu(conv(x) + b1, 0.2) * sigmoid(lrelu(conv(x) + b1, 0.2))
Two things seem strange and raise the following questions:
1. Why do we apply the same convolution to create both the mask and the features? Shouldn't we use two different convolutions, as we do with gated convolution? Also, why the different biases, then?
2. Shouldn't we apply sigmoid to g instead of deconv?

Thanks in advance! For convenience, I posted the relevant code below:
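(The original snippet did not survive here, so what follows is a minimal TF2-style sketch reconstructed from the steps above. The names gated_deconv_as_described, deconv_layer, b1, and b2 are stand-ins, not the repo's actual identifiers.)

```python
import tensorflow as tf

def gated_deconv_as_described(x, deconv_layer, b1, b2):
    # One shared transposed convolution; applying it "twice" to the same
    # input just duplicates the result, since the weights are shared.
    deconv = deconv_layer(x)   # x1
    g = deconv                 # x2, identical to x1 at this point

    deconv = deconv + b1       # x1 += b1
    g = g + b2                 # x2 += b2 (dead code: g is overwritten below)

    deconv = tf.nn.leaky_relu(deconv, alpha=0.2)  # x1 = lrelu(x1, 0.2)
    g = tf.nn.sigmoid(deconv)  # sigmoid of the *activated* feature branch,
                               # so b2 and the second copy never reach the output

    # Equivalent to lrelu(conv(x) + b1, 0.2) * sigmoid(lrelu(conv(x) + b1, 0.2))
    return deconv * g

if __name__ == "__main__":
    # Illustrative wiring; filter counts and shapes are arbitrary.
    x = tf.random.normal([1, 16, 16, 32])
    deconv_layer = tf.keras.layers.Conv2DTranspose(
        64, 3, strides=2, padding="same", use_bias=False)
    b1 = tf.Variable(tf.zeros([64]))
    b2 = tf.Variable(tf.zeros([64]))
    y = gated_deconv_as_described(x, deconv_layer, b1, b2)  # (1, 32, 32, 64)
```

As the comments note, under this flow b2 is a dead parameter and the gate is just a squashed copy of the feature branch, which is exactly what the two questions above point at.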
Issue #37 also mentions this, and yes, you're right. You can check my implementation here. It's not complete yet, but I've mostly re-implemented it in Keras.
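For contrast, here is a sketch of the shape the questions point toward, following the gated-convolution formulation (two independent convolutions, each with its own bias, and sigmoid applied to the raw gate branch). The class and layer names are assumptions, not code from either repository.

```python
import tensorflow as tf

class GatedDeconv(tf.keras.layers.Layer):
    """Gated transposed convolution with separate feature and gate branches."""

    def __init__(self, filters, kernel_size=3, strides=2, **kwargs):
        super().__init__(**kwargs)
        # Separate weights (and biases) for the two branches.
        self.feature_conv = tf.keras.layers.Conv2DTranspose(
            filters, kernel_size, strides=strides, padding="same")
        self.gate_conv = tf.keras.layers.Conv2DTranspose(
            filters, kernel_size, strides=strides, padding="same")

    def call(self, x):
        features = tf.nn.leaky_relu(self.feature_conv(x), alpha=0.2)
        # Sigmoid on the raw gate branch, not on the activated features.
        gate = tf.nn.sigmoid(self.gate_conv(x))
        return features * gate
```

With separate weights, the gate can learn where to pass features independently of what the features are, which is the point of the gating in the first place.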