
how to train a 256*128 image dataset and output 256*128 result? #63

Open
GuiQuLaiXi opened this issue Oct 15, 2018 · 3 comments

@GuiQuLaiXi

parser.add_argument('--imageSize', type=int, default=64, help='the height / width of the input image to network')

The default imageSize is 64*64; how do I define a new image size like m*n?

@zyoohv

zyoohv commented Oct 23, 2018

If you want to train with 256*128 images, you must change the structure of the network.

For example, where the code uses a (2, 2) stride, you need to change it to (2, 1):

self.main.add_module('initial_conv_{}'.format(ch), nn.Conv2d(3, ch, (2, 1), (2, 1), 0, bias=bias))

Then you can produce images of size 256*128.
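To make the stride trick concrete, here is a minimal sketch (assumed channel count and layer sizes, not this repo's exact code): a single (2, 1)-strided conv halves only the height, turning a 256*128 input into a square 128*128 feature map, after which the usual (2, 2)-strided DCGAN-style blocks apply unchanged.

```python
import torch
import torch.nn as nn

ch = 64  # base channel count, assumed for illustration

disc_head = nn.Sequential(
    # 3 x 256 x 128 -> ch x 128 x 128: stride (2, 1) halves height only
    nn.Conv2d(3, ch, kernel_size=(2, 1), stride=(2, 1), padding=0, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    # ch x 128 x 128 -> 2*ch x 64 x 64: ordinary (2, 2)-strided block
    nn.Conv2d(ch, 2 * ch, kernel_size=4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
)

x = torch.randn(1, 3, 256, 128)
print(disc_head(x).shape)  # torch.Size([1, 128, 64, 64])
```

The same idea applies in reverse in the generator: one ConvTranspose2d with stride (2, 1) at the end doubles only the height, so the rest of the network can keep producing square feature maps.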

@GuiQuLaiXi
Author

> If you want to train with 256*128 images, you must change the structure of the network.
>
> For example, where the code uses a (2, 2) stride, you need to change it to (2, 1):
>
> self.main.add_module('initial_conv_{}'.format(ch), nn.Conv2d(3, ch, (2, 1), (2, 1), 0, bias=bias))
>
> Then you can produce images of size 256*128.

Thank you, but the default input image size is N*N. Do I need to change anything else?

@zyoohv

zyoohv commented Oct 23, 2018

@GuiQuLaiXi

Yes, you need to change the layers in both the generator and the discriminator.

What's more, you may need to prepare a dataloader that produces images with shape 256*128.
