
ml4a-invisible-cities

A project made during the "Machine Learning for Artists" workshop with Gene Kogan @Opendotlab.

See the full website + gallery: http://opendot.github.io/ml4a-invisible-cities/

Concept

“With cities, it is as with dreams: everything imaginable can be dreamed, but even the most unexpected dream is a rebus that conceals a desire or, its reverse, a fear. Cities, like dreams, are made of desires and fears, even if the thread of their discourse is secret, their rules are absurd, their perspectives deceitful, and everything conceals something else.”

Italo Calvino, Invisible Cities

The idea is to create an imaginary city from a hand-drawn sketch. Trained on aerial images of real cities, a neural network can transform the sketch into a realistic bird's-eye view of a city. Then, by switching between models trained on different reference cities, it is possible to generate different views of the same imaginary city.

How it works

We were fascinated by the possibility of generating new, non-existent but realistic images using conditional adversarial neural networks that remember a certain set of features from the things they have seen in the past: the same process we humans undergo when we dream.

Dataset

Taking inspiration from the given examples, we applied a pre-defined color scheme to geographic data from OpenStreetMap using Mapbox Studio: roads, green spaces, buildings, and water were styled with distinct colours (black, green, red, and blue respectively), so that the neural network (NN) could compare these tiles to aerial images and learn the corresponding features.
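The styling itself was done in Mapbox Studio, but the mapping is simple to express in code. Below is a minimal Python sketch (the helper and the polygon data are hypothetical illustrations, not part of the actual pipeline) that rasterizes feature polygons into the same flat colour scheme:

```python
# Hypothetical sketch of the tile-styling step: rasterize map feature
# polygons into the flat color scheme the network is trained on.
from PIL import Image, ImageDraw

# Color scheme described above; the background stays plain white.
COLORS = {
    "road": "black",
    "green_space": "green",
    "building": "red",
    "water": "blue",
}

def style_tile(features, size=256):
    """features: list of (feature_class, [(x, y), ...]) polygons in pixel space."""
    tile = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(tile)
    for feature_class, polygon in features:
        draw.polygon(polygon, fill=COLORS[feature_class])
    return tile

# Example: one building and one road segment drawn as a thin polygon.
tile = style_tile([
    ("building", [(40, 40), (120, 40), (120, 110), (40, 110)]),
    ("road", [(0, 200), (255, 200), (255, 210), (0, 210)]),
])
tile.save("styled_tile.png")
```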

Training, evaluating, running

We then used vvvv as a tool to collect both satellite imagery and the associated labelled map tiles, and trained a conditional generative adversarial network (cGAN) to reconstruct the satellite imagery from its map tiles.
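The objective is the one from the pix2pix paper credited below: an adversarial loss on (map tile, satellite tile) pairs plus an L1 reconstruction term. A schematic PyTorch sketch of one training step follows; the tiny stand-in networks are placeholders for illustration, not the architecture actually trained:

```python
# Schematic sketch of one pix2pix-style training step (placeholder nets).
import torch
import torch.nn as nn

# Toy generator: map tile (3ch) -> satellite tile (3ch).
G = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)
# Toy PatchGAN-style discriminator: judges (map, satellite) pairs.
D = nn.Sequential(
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),  # grid of real/fake logits
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(map_tile, sat_tile, lambda_l1=100.0):
    # Discriminator: real pairs -> 1, generated pairs -> 0.
    fake = G(map_tile)
    d_real = D(torch.cat([map_tile, sat_tile], dim=1))
    d_fake = D(torch.cat([map_tile, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D, and stay close to the real satellite tile (L1).
    d_fake = D(torch.cat([map_tile, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, sat_tile)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Dummy batch of 256x256 tiles scaled to [-1, 1].
maps = torch.rand(2, 3, 256, 256) * 2 - 1
sats = torch.rand(2, 3, 256, 256) * 2 - 1
print(train_step(maps, sats))
```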

It then produces a set of images according to the unique characteristics of each city: the same blue shade will translate into a Venetian canal or a simple river, red will become a 17th-century villa or a 1950s modernist house in the hills of L.A.

To cover the variability of all remaining geographic features, we left the background plain white. This led to unexpected results, as the NN could interpret the same white patch of land as an airport, a maize field, or a dumpster.

Gallery

City style transfer

With this technique, we fed the map tiles of one city to the generative model trained on another city, producing satellite imagery of the former in the style of the latter.
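Operationally this is just a checkpoint swap at inference time. A hedged sketch, assuming a generator saved whole with `torch.save` and hypothetical file names:

```python
# Sketch of the style-transfer step: run tiles from one city through a
# generator trained on another. File names here are hypothetical.
import torch
from torchvision import transforms
from PIL import Image

to_tensor = transforms.Compose([
    transforms.ToTensor(),                       # [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # -> [-1, 1]
])

G_venice = torch.load("generator_venice.pt")     # model trained on Venice
G_venice.eval()

map_tile = to_tensor(Image.open("la_map_tile.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fake_sat = G_venice(map_tile)                # L.A. layout, Venetian texture

out = ((fake_sat[0] + 1) / 2).clamp(0, 1)        # back to [0, 1]
transforms.ToPILImage()(out).save("la_in_venice_style.png")
```

The imaginary-maps experiment below uses the same path, only with hand-drawn tiles as input.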

Imaginary maps

Here we feed completely hand-drawn tiles to the models, producing hallucinations of cities.

Team

  • Gene Kogan
  • Gabriele Gambotto
  • Ambhika Samsen
  • Andrej Boleslavsky
  • Michele Ferretti
  • Damiano Gui
  • Fabian Frei

Credits

All credit for the algorithm development goes to "Image-to-Image Translation with Conditional Adversarial Networks" by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, arXiv, 2016.
