Style Transfer -- a real-time implementation on mobile devices

Chinese version

Introduction

This repository contains a PyTorch-based implementation of "Perceptual Losses for Real-Time Style Transfer and Super-Resolution". We make some modifications so that it runs in real time on Android devices (about 30 ms for a 256x256 input on a Snapdragon 845).
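
For orientation, the paper's training objective combines a content loss (feature distance in a VGG network) with a style loss (Gram-matrix distance). Below is a minimal sketch of that idea, assuming you have already extracted lists of VGG16 feature maps; the function names and weights are illustrative, not this repo's API.

    import torch.nn.functional as F

    def gram_matrix(feat):
        # feat: (B, C, H, W) feature map from one VGG layer.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        # Channel correlations capture style, independent of spatial layout.
        return f @ f.transpose(1, 2) / (c * h * w)

    def perceptual_loss(out_feats, content_feats, style_feats,
                        content_layer=2, style_weight=1e5):
        # Each argument is a list of VGG16 feature maps for the stylized
        # output, the content image, and the style image respectively
        # (hypothetical helper output, not this repo's API).
        content_loss = F.mse_loss(out_feats[content_layer],
                                  content_feats[content_layer])
        style_loss = sum(F.mse_loss(gram_matrix(o), gram_matrix(s))
                         for o, s in zip(out_feats, style_feats))
        return content_loss + style_weight * style_loss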

Images, left to right: source image, result of the paper's method, result of our method.

Video: source video and stylized output (running on a Snapdragon 845).

Download pretrained models

style   normal         slim
udnie   Google Drive   Google Drive

Model architecture

(architecture diagram)
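
The diagram is not reproduced here; for orientation, the transformer network in the paper follows a downsample / residual / upsample pattern. The sketch below shows that shape with assumed channel widths and block counts, and is not this repo's exact model (the slim variant in particular uses a smaller network).

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Standard residual block from the paper's transformer network.
        def __init__(self, ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.InstanceNorm2d(ch, affine=True),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.InstanceNorm2d(ch, affine=True),
            )

        def forward(self, x):
            return x + self.block(x)

    class TransformerNet(nn.Module):
        # Downsample -> residual blocks -> upsample; base width and block
        # count here are assumptions, not this repo's values.
        def __init__(self, base=32, n_res=5):
            super().__init__()
            def conv(cin, cout, k, stride=1):
                return [nn.Conv2d(cin, cout, k, stride=stride, padding=k // 2),
                        nn.InstanceNorm2d(cout, affine=True),
                        nn.ReLU(inplace=True)]
            self.model = nn.Sequential(
                *conv(3, base, 9),
                *conv(base, base * 2, 3, stride=2),
                *conv(base * 2, base * 4, 3, stride=2),
                *[ResidualBlock(base * 4) for _ in range(n_res)],
                nn.Upsample(scale_factor=2, mode='nearest'),
                *conv(base * 4, base * 2, 3),
                nn.Upsample(scale_factor=2, mode='nearest'),
                *conv(base * 2, base, 3),
                nn.Conv2d(base, 3, 9, padding=4),
            )

        def forward(self, x):
            return self.model(x)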


Requirements

You will need the following to run this project:

  • Python 3
  • PIL (Pillow)
  • PyTorch
  • torchvision
  • cv2 (opencv-python)
  • onnxruntime (optional)
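
You can install the Python dependencies in one step; these are the usual PyPI package names for the list above (pin versions as needed):

  pip install pillow torch torchvision opencv-python onnxruntime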

How to run it yourself (。•ᴗ-)_

  • Preparation

    • You should download this project and check that your environment meets the requirements above.

    • Download the dataset and put it under a folder named data. Unlike the paper, we use Pascal VOC 2012 instead of Microsoft COCO, because COCO takes much longer to download.

      If you use Pascal VOC 2012, unpacking the archive produces a folder named VOCdevkit; put all the images from VOCdevkit/VOC2012/JPEGImages under the data folder (a small copy script follows these steps).

      If you don't want to use the data folder, use train.py's --dataset argument to point at your training set; this is described in more detail below.

    • Download the pretrained VGG16 model and put it under pretrain_models.
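
    As referenced above, a minimal sketch for flattening the VOC images into the data folder (paths follow the steps above; adjust to your layout):

      import shutil
      from pathlib import Path

      # Paths assumed from the steps above; adjust to where you unpacked VOC.
      src = Path("VOCdevkit/VOC2012/JPEGImages")
      dst = Path("data")
      dst.mkdir(exist_ok=True)

      for jpg in src.glob("*.jpg"):
          shutil.copy(jpg, dst / jpg.name)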

  • Training

    • Select the picture you like and put it under the style_imgs folder. We provide two styles, mosaic and udnie.

    • Open a terminal in the project folder. You can start training your own model with:

      python train.py --style_image=./style_imgs/xxx.jpg

      where xxx.jpg is the name of the style image you chose.

    If you want to customize settings, or you didn't follow the steps above for file locations, click here for more details.
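
    For example, to train the udnie style against a dataset outside the data folder (assuming the flag is spelled --dataset, as mentioned in Preparation):

      python train.py --style_image=./style_imgs/udnie.jpg --dataset=/path/to/your/images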

  • Other

    • When you finish training, you can use checkout.py to view your model's results (a rough sketch of such a stylization pass appears below).

      python checkout.py --model_path=./models/style-transfer/xxx.pth

        where ./models/style-transfer/xxx.pth is the path to the model.

        You can find more information about checkout.py's arguments in the code.
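
      For reference, a single-image stylization pass typically looks like the sketch below; the model import, file names, and preprocessing are assumptions, not necessarily what checkout.py does (scaling must match the training code):

        import cv2
        import torch

        from transformer_net import TransformerNet  # hypothetical module name

        model = TransformerNet()
        model.load_state_dict(torch.load("./models/style-transfer/xxx.pth",
                                         map_location="cpu"))
        model.eval()

        # BGR uint8 image -> float tensor of shape (1, 3, H, W).
        img = cv2.imread("test.jpg")  # hypothetical input image
        x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0)

        with torch.no_grad():
            y = model(x).squeeze(0).permute(1, 2, 0).clamp(0, 255).byte().numpy()

        cv2.imwrite("stylized.jpg", y)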

    • If you want to deploy it on a mobile phone, you can use pytorch2onnx.py to export your model to ONNX, then convert it to another format that runs on mobile devices. You can use checkout.py to test your ONNX file (an export sketch follows below).

      I use ncnn to deploy it. You can use this tool to convert your ONNX file into a format ncnn can read. With thread=4 and input_size=(256, 256), the slim model runs in about 30 ms on a Snapdragon 845.

      Some issues on Android remain unresolved, so I will publish my Android code once they are fixed.
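
      A minimal sketch of what such an export typically does, with a quick onnxruntime sanity check; the model import is hypothetical, and this is not necessarily how this repo's pytorch2onnx.py is written:

        import torch
        import onnxruntime as ort

        from transformer_net import TransformerNet  # hypothetical module name

        model = TransformerNet()
        model.load_state_dict(torch.load("./models/style-transfer/xxx.pth",
                                         map_location="cpu"))
        model.eval()

        # Fixed 256x256 input, matching the benchmark above.
        dummy = torch.randn(1, 3, 256, 256)
        torch.onnx.export(model, dummy, "style_transfer.onnx",
                          input_names=["input"], output_names=["output"],
                          opset_version=11)

        # Sanity-check the exported file with onnxruntime.
        sess = ort.InferenceSession("style_transfer.onnx")
        (onnx_out,) = sess.run(None, {"input": dummy.numpy()})
        print(onnx_out.shape)  # expect (1, 3, 256, 256)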
