Colorizing black-and-white images using autoencoders.
Autoencoders are a type of feedforward neural network in which the output is trained to match the input. They compress the input into a lower-dimensional code and then reconstruct the output from this representation. The code is a compact "summary" or "compression" of the input, also called the latent-space representation.
An autoencoder consists of three components:
- encoder: compresses the input and produces the code
- code: the compact latent representation of the input
- decoder: reconstructs the input using only the code
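The three components above can be sketched in a few lines of NumPy. This is a minimal illustration of the encoder-code-decoder structure, not the project's actual network; the dimensions and weight matrices are placeholders I chose for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64-dimensional input compressed to an 8-dimensional code.
input_dim, code_dim = 64, 8
W_enc = rng.standard_normal((input_dim, code_dim)) * 0.1
W_dec = rng.standard_normal((code_dim, input_dim)) * 0.1

def encoder(x):
    # Compresses the input and produces the code.
    return np.tanh(x @ W_enc)

def decoder(code):
    # Reconstructs the input using only the code.
    return code @ W_dec

x = rng.standard_normal(input_dim)
code = encoder(x)       # the latent-space representation
x_hat = decoder(code)   # the reconstruction
```

In a real network each of these single matrix multiplications would be a stack of (convolutional) layers, but the information flow is the same: input → code → reconstruction.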
To build an autoencoder, we need:
- an encoding method
- a decoding method
- a loss function to compare the outputs with the target
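For the loss function, a common choice for reconstruction tasks is mean squared error between the decoder's output and the target. A minimal sketch (the function name is mine, not taken from the original implementation):

```python
import numpy as np

def mse_loss(output, target):
    # Mean squared error: average squared difference between
    # the decoder's output and the target image.
    return np.mean((np.asarray(output) - np.asarray(target)) ** 2)
```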
Applying autoencoders, this project colorizes black-and-white images: the grayscale images are provided as input, and the network learns to decode a colored version of them as output. The detailed model with implementation can be found here.
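One common way to build the input/target pairs for colorization (a sketch of the general idea, not necessarily the repository's actual preprocessing) is to derive the grayscale input from each color image and keep the color image as the decoding target:

```python
import numpy as np

def make_training_pair(rgb):
    # Build a (grayscale input, color target) pair from an RGB image.
    # rgb: (H, W, 3) float array in [0, 1].
    # The grayscale channel uses Rec. 601 luminance weights (an assumption;
    # any consistent grayscale conversion would work here).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return gray[..., np.newaxis], rgb

# Usage: a random 32x32 array stands in for a dataset sample.
img = np.random.default_rng(0).random((32, 32, 3))
x, y = make_training_pair(img)  # x: (32, 32, 1), y: (32, 32, 3)
```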
Due to the system's limited storage capacity, only the validation directory of the COCO-2017 dataset is used to train the model.
A training set of 3000 images produced promising results, with the model performing best on images containing flora and fauna.