PSF and lensless data simulator using python-only raytracing #10

29 changes: 29 additions & 0 deletions examples/raytracing_simulator/README.txt
@@ -0,0 +1,29 @@
This code is a raytracing PSF simulator that was part of a project on LenslessPiCam:
https://github.com/LCAV/LenslessPiCam

It is provided as is and likely needs some adaptation to be runnable, as it depends on the aforementioned project.

It is also quite inefficient and serves mainly demonstration purposes, especially since the raytracing is done entirely in Python; most of the methods in simulator/utils could certainly be replaced by their equivalents from common libraries such as OpenCV.

If you are looking for more efficient code, please check this other branch of the repository, which proposes a solution based on the excellent Mitsuba renderer:
https://github.com/Julien-Sahli/waveprop/tree/mitsuba



To give it a try, run:

scripts/simulator/generate_psf.py to generate a PSF from a mask / height map / normal map
scripts/simulator/render_scene.py to generate lensless data from a PSF and a scene

Use the examples provided in the data folder!

To generate a scene, run scripts/conversion/blender_export.py directly from Blender (see the instructions in the script).

The lensless data should be reconstructible with LenslessPiCam, but because of how LenslessPiCam works, the associated PSF needs to be padded with black so that its resolution matches that of the lensless data. It may also need to be rotated by 180°.
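As a minimal sketch of that padding and rotation step (the function name here is hypothetical, and the target resolution depends on your lensless data):

```python
import numpy as np

def pad_and_rotate_psf(psf, target_shape):
    """Zero-pad a (H, W) PSF to target_shape, keeping it centered,
    then rotate it by 180 degrees."""
    pad_h = target_shape[0] - psf.shape[0]
    pad_w = target_shape[1] - psf.shape[1]
    assert pad_h >= 0 and pad_w >= 0, "target must be at least as large as the PSF"
    padded = np.pad(
        psf,
        ((pad_h // 2, pad_h - pad_h // 2), (pad_w // 2, pad_w - pad_w // 2)),
        mode="constant",  # pad with black, i.e. zeros
    )
    return np.rot90(padded, k=2)  # 180-degree rotation
```

For a multi-layer PSF stored as (depth, H, W), apply the function to each layer.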



Also check the corresponding Medium posts!

https://medium.com/@julien.sahli/simulating-lensless-camera-psfs-with-ray-tracing-a224ca11f758
https://medium.com/@julien.sahli/simulating-lensless-camera-data-from-3d-scenes-a3da3daf50e
@@ -0,0 +1,7 @@
Generate a new PSF using a mask:
python ../../../../scripts/simulator/generate_psf.py --input_fp=mask.png --output_fp=result.npy --mode=mask

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result.npy

(Mask credits: Julien Sahli)
@@ -0,0 +1,13 @@
Generate a new PSF using a normal map:
python ../../../../scripts/simulator/generate_psf.py --input_fp=normals.jpg --output_fp=result.npy --mode=normals

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result.npy

Oversample the normal map to produce higher output quality:
python ../../../../scripts/simulator/generate_psf.py --input_fp=normals.jpg --output_fp=result_highres.npy --mode=normals --oversample=2

And export it:
python ../../../../scripts/conversion/npy_to_tiff.py result_highres.npy

(Normal map credits: https://www.cadhatch.com/seamless-water-textures)
@@ -0,0 +1,20 @@
Generate a new PSF using a height map:
python ../../../../scripts/simulator/generate_psf.py --input_fp=heights.png --output_fp=result.npy --mode=height --save_normals

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result.npy

Try changing the size of the Sobel operator kernel to a smaller value to make the image sharper (min is 3, default is 7):
python ../../../../scripts/simulator/generate_psf.py --input_fp=heights.png --output_fp=result_sharp.npy --mode=height --save_normals --sobel_size=3

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result_sharp.npy

Try changing the size of the Sobel operator kernel to a bigger value to make the image smoother:
python ../../../../scripts/simulator/generate_psf.py --input_fp=heights.png --output_fp=result_smooth.npy --mode=height --save_normals --sobel_size=23

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result_smooth.npy
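The height-to-normals conversion itself lives in the simulator, but the idea behind the --sobel_size flag can be sketched in pure NumPy (a hand-rolled 3x3 Sobel for illustration only; the actual script presumably uses a library implementation and supports other kernel sizes):

```python
import numpy as np

def sobel_normals(heights):
    """Estimate a normal map from a (H, W) height map with 3x3 Sobel gradients.

    Larger Sobel kernels average the gradient over more pixels, which is
    why a bigger --sobel_size yields smoother normals (and PSFs).
    """
    h = np.pad(heights.astype(np.float32), 1, mode="edge")
    # 3x3 Sobel: smooth along one axis, differentiate along the other.
    gx = (h[:-2, 2:] + 2 * h[1:-1, 2:] + h[2:, 2:]) \
       - (h[:-2, :-2] + 2 * h[1:-1, :-2] + h[2:, :-2])
    gy = (h[2:, :-2] + 2 * h[2:, 1:-1] + h[2:, 2:]) \
       - (h[:-2, :-2] + 2 * h[:-2, 1:-1] + h[:-2, 2:])
    # A surface z = f(x, y) has (unnormalized) normal (-df/dx, -df/dy, 1).
    normals = np.dstack((-gx, -gy, np.ones_like(gx)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)
```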


(Height map credits: Julien Sahli)
@@ -0,0 +1,22 @@
Try different camera settings for generating PSFs.

The list of settings can be displayed with the following:
python ../../../../scripts/simulator/store_settings.py --help

Unspecified settings will be left at their default values.

Try the following different scenes:
python ../../../../scripts/simulator/store_settings.py --path=default --scene_min_depth=0.1 --scene_max_depth=1
python ../../../../scripts/simulator/store_settings.py --path=close --scene_min_depth=0.2 --scene_max_depth=0.7
python ../../../../scripts/simulator/store_settings.py --path=far --scene_min_depth=2 --scene_max_depth=5
python ../../../../scripts/simulator/store_settings.py --path=small-sensor --sensor_width=0.8
python ../../../../scripts/simulator/store_settings.py --path=big-sensor --sensor_width=5
python ../../../../scripts/simulator/store_settings.py --path=squeezed --diffuser_width=1 --diffuser_height=0.5
python ../../../../scripts/simulator/store_settings.py --path=thick --diffuser_thickness=1

Generate and export the PSF as usual, but try the different cameras:
python ../../../../scripts/simulator/generate_psf.py --input_fp=heights.png --output_fp=result.npy --mode=height --oversample 0.5 --save_normals --camera_fp default-cam.npy

python ../../../../scripts/conversion/npy_to_tiff.py result.npy

(Height map credits: Julien Sahli)
@@ -0,0 +1,14 @@
Demonstrate light refraction using a normal map.
First, save the following camera settings:
python ../../../../scripts/simulator/store_settings.py --path=settings --scene_min_depth=0.1 --scene_max_depth=1

Generate the PSF:
python ../../../../scripts/simulator/generate_psf.py --input_fp=normals.png --output_fp=result.npy --mode=normals --oversample 0.5 --dest_shape 10 300 300 --camera_fp settings-cam.npy --opengl

Export the PSF to tiff images to display them:
python ../../../../scripts/conversion/npy_to_tiff.py result.npy

When generating the PSF, try with and without the --opengl flag to parse the normals with either the OpenGL or the DirectX convention and see the difference.
(This normal map uses the OpenGL convention, so using DirectX will produce a wrong PSF, with the y-component of the normals inverted.)
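The two conventions only differ in the direction of the y-axis, so a DirectX-style normal map can be converted to an OpenGL-style one (and back) by inverting the green channel; a minimal sketch for an 8-bit RGB map:

```python
import numpy as np

def flip_normal_convention(normal_map):
    """Toggle an 8-bit RGB normal map between the DirectX and OpenGL
    conventions by inverting the green (y) channel."""
    converted = normal_map.copy()
    converted[..., 1] = 255 - converted[..., 1]  # flip the y-component
    return converted
```

Applying the function twice is a no-op, since the flip is its own inverse.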

(Normal map credits: CC BY 4.0 Julian Herzog - https://en.wikipedia.org/wiki/Normal_mapping)
@@ -0,0 +1,3 @@
A scene that is small enough to be rendered with the raytracer; we will compare it to the convolution version afterwards.

python ../../../../scripts/simulator/render_scene.py --mode=raytracing --radiance_fp=scene.png --depths_fp=scene_dep.png --psf_fp=psf.png --out_fp=out.png --dest_shape 42 42 --camera_fp good-cam.npy --scene_fp good-scene.npy
@@ -0,0 +1,5 @@
Begin by generating the corresponding PSF:
python ../../../../scripts/simulator/generate_psf.py --input_fp=psf.png --output_fp=psf.npy --mode=mask --camera_fp=demo-cam.npy

Then render the scene:
python ../../../../scripts/simulator/render_scene.py --radiance_fp=scene.png --depths_fp=scene_dep.png --psf_fp=psf.npy --out_fp=out.png --camera_fp=demo-cam.npy --scene_fp=demo-scene.npy
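render_scene.py itself is not shown here, but the idea behind convolution-based rendering can be sketched as follows: slice the scene by depth, convolve each slice with the PSF layer measured at that depth, and sum the contributions on the sensor. A rough sketch, assuming (depth, H, W) arrays and circular FFT convolution:

```python
import numpy as np

def render_convolution(radiance_layers, psf_layers):
    """Depth-wise convolution rendering sketch.

    radiance_layers, psf_layers: arrays of shape (depth, H, W).
    Each depth slice of the scene is convolved with the PSF at that
    depth; the sensor sees the sum of all contributions.
    """
    assert radiance_layers.shape == psf_layers.shape
    out = np.zeros(radiance_layers.shape[1:])
    for scene_slice, psf_slice in zip(radiance_layers, psf_layers):
        # Circular convolution via the FFT; real scenes would need padding
        # to avoid wrap-around, omitted here for brevity.
        out += np.real(np.fft.ifft2(np.fft.fft2(scene_slice) * np.fft.fft2(psf_slice)))
    return out
```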
@@ -0,0 +1,3 @@
Copy the PSF generated in psf_generation/3-height_map, then render the following scene:
python ../../../../scripts/simulator/render_scene.py --radiance_fp=scene.png --depths_fp=scene_dep.png --psf_fp=psf.npy --out_fp=out.png
65 changes: 65 additions & 0 deletions examples/raytracing_simulator/scripts/conversion/blender_export.py
@@ -0,0 +1,65 @@
import numpy as np

try:
    import bpy

except ImportError:
    print("\nError: this script needs to be run from Blender, not from the Lensless environment."
          "\nRead the instructions inside the file to continue.\n")
    quit()


"""
This script exports an RGB image and a depth map from a Blender scene.
It is meant to be run directly in Blender, not from the Lensless environment.

The exported scenes can then be used in the simulator to generate lensless data

Credits:
The contents of this file are based on "Generate Depth and Normal Maps with Blender", Saif Khan, 26.12.2021,
under the Creative Commons Attribution 4.0 International License : https://creativecommons.org/licenses/by/4.0/
Link to the original work : https://www.saifkhichi.com/blog/blender-depth-map-surface-normals

Instructions:
- Load or create any scene of your choice in Blender (https://docs.blender.org/)
- Lots of scenes can be downloaded freely from websites such as https://www.blendswap.com/
- You may want to set the background color of the scene to black; otherwise, the simulator
  will consider it a physical plane placed at the maximum depth of the scene. To do so, in
  the "Layout" tab, search the "World" menu on the right and, in the "Surface" sub-menu,
  change the "Color" field to black.
  If you forgot this step, you can still manually edit the exported image later in the image
  editor of your choice to change the background pixels to black.

- In the "Layout" tab, search the "View Layer Properties" menu on the right and mark the "Combined" and "Z" boxes

- In the "Compositing" tab, add the following nodes:
- Tick "Use Nodes" to create two nodes : "Render Layer" and "Composite"
- Select "Add" -> "Output" -> "Viewer" to create a new node of the same name.
- Select "Add" -> "Vector" -> "Normalize" to create a new node of the same name.

- Still in the "Compositing" tab, connect the nodes in the following way:
- Render Layer's field "Image" should already be connected to Composite's field Image. If not, connect it now.
- Connect Render Layer's field "Depth" to Normalize's input, which is the "Value" field at the bottom left.
- Connect Normalize's output, which is the "Value" field at the top right, to Viewer's field "Image"
- In Composite node, "Use Alpha" should already be ticked. If not, tick it now.
- In Viewer node, "Use Alpha" should already be ticked. If not, tick it now.

- Still in the "Compositing" tab, in the Render Layer node, click the Render button at the bottom right

- In the "Scripting" tab, go to the Text Editor area. It should be in the middle by default; if it is not, open it
  with the shortcut Shift+F11. Open this file in it, set the output path below to your liking, then run the script.

- Your data should now be properly exported at the specified path!

"""

output_path = "/your/custom/path/"

# Render the scene and save the RGB image.
bpy.context.scene.render.filepath = output_path + "scene.png"
bpy.ops.render.render(animation=False, write_still=True)

# The Viewer Node holds the normalized depth map as a flat RGBA float buffer;
# keep one channel, reshape it to (h, w), and reorient it to match the rendered image.
data = bpy.data.images['Viewer Node']
w, h = data.size
depths = np.fliplr(np.rot90(np.reshape(np.array(data.pixels[:], dtype=np.float32), (h, w, 4))[:, :, 0], k=2))

np.save(output_path + "scene-normals.npy", depths)
30 changes: 30 additions & 0 deletions examples/raytracing_simulator/scripts/conversion/mat_to_npy.py
@@ -0,0 +1,30 @@
"""
This script exports the .mat PSF from https://github.com/Waller-Lab/DiffuserCam/tree/master/example_data
to a .npy file usable by the simulator; use npy_to_tiff.py to export tiff images for visualisation.
"""

import sys
import numpy as np
import scipy.io as sp


if len(sys.argv) < 2:
    print("Error: no filename provided. Aborting.")
    sys.exit()

filename = sys.argv[1]
if not filename.endswith(".mat"):
    print("Error: file is not a .mat file. Aborting.")
    sys.exit()

# loadmat returns a dict whose first three entries are metadata
# (__header__, __version__, __globals__); the fourth is the PSF itself.
img = np.array(list(sp.loadmat(filename).values()), dtype=object)[3]

# Normalize to [0, 1].
img = img - np.min(img)
img = img / np.max(img)

# Move the depth axis to the front.
img = np.swapaxes(img, 0, -1)
img = np.swapaxes(img, 1, 2)

np.save("psf.npy", img)
70 changes: 70 additions & 0 deletions examples/raytracing_simulator/scripts/conversion/npy_to_obj.py
@@ -0,0 +1,70 @@
import sys
import numpy as np
from lensless.util import resize3d

if len(sys.argv) < 2:
    print("Error: no filename provided. Aborting.")
    sys.exit()

filename = sys.argv[1]
if not filename.endswith(".npy"):
    print("Error: file is not a .npy file. Aborting.")
    sys.exit()

data = np.load(filename)

# Sum color channels for now.
if len(data.shape) == 4:
    data = np.sum(data, axis=3)

# Optional second argument: downsampling factor.
factor = 1.0 / float(sys.argv[2]) if len(sys.argv) > 2 else 1
data = resize3d(data, factor)


if np.max(data) > 0:
    data = data / np.max(data)
else:
    print("Error: data has no positive value. Aborting.")
    sys.exit()

# Optional third argument: threshold. The default value of mean^0.5 is a heuristic.
threshold = pow(np.mean(data), 0.5) if len(sys.argv) <= 3 else float(sys.argv[3])
print("threshold:", threshold)

data_shape = data.shape

output_file = open(filename.replace(".npy", ".obj"), "w")

# Emit one small octahedron per voxel above the threshold, scaled by the voxel's value.
i = 0
for z in range(data_shape[0]):
    print("converting depth layer", z + 1, "/", data_shape[0])
    for x in range(data_shape[1]):
        for y in range(data_shape[2]):
            v = data[z, x, y]
            if v >= threshold:
                v = v / 2

                output_file.write(
                    "v " + str(x) + " " + str(y) + " " + str(z - v) + "\n" +
                    "v " + str(x) + " " + str(y) + " " + str(z + v) + "\n" +
                    "v " + str(x) + " " + str(y - v) + " " + str(z) + "\n" +
                    "v " + str(x) + " " + str(y + v) + " " + str(z) + "\n" +
                    "v " + str(x - v) + " " + str(y) + " " + str(z) + "\n" +
                    "v " + str(x + v) + " " + str(y) + " " + str(z) + "\n" +
                    "\n" +
                    "f " + str(i + 1) + " " + str(i + 3) + " " + str(i + 5) + "\n" +
                    "f " + str(i + 1) + " " + str(i + 3) + " " + str(i + 6) + "\n" +
                    "f " + str(i + 1) + " " + str(i + 4) + " " + str(i + 5) + "\n" +
                    "f " + str(i + 1) + " " + str(i + 4) + " " + str(i + 6) + "\n" +
                    "f " + str(i + 2) + " " + str(i + 3) + " " + str(i + 5) + "\n" +
                    "f " + str(i + 2) + " " + str(i + 3) + " " + str(i + 6) + "\n" +
                    "f " + str(i + 2) + " " + str(i + 4) + " " + str(i + 5) + "\n" +
                    "f " + str(i + 2) + " " + str(i + 4) + " " + str(i + 6) + "\n" +
                    "\n\n"
                )
                i += 6


output_file.close()


90 changes: 90 additions & 0 deletions examples/raytracing_simulator/scripts/conversion/npy_to_tiff.py
@@ -0,0 +1,90 @@
# This script exports a .npy image or image stack (such as a simulated PSF)
# to tiff images for user visualisation.

import os
import sys
import numpy as np
import cv2

if len(sys.argv) < 2:
    print("Error: no filename provided. Aborting.")
    sys.exit()

filename = sys.argv[1]
if not filename.endswith(".npy"):
    print("Error: file is not a .npy file. Aborting.")
    sys.exit()

out_path = os.path.splitext(filename)[0]  # remove the file extension from the path, if any

data = np.load(filename).astype(np.float32)
data_shape = data.shape
l = len(data_shape)

print("\nInput shape:", data_shape)

if l == 2:
    print("As the shape has length 2, it will be interpreted as a single-layer grayscale image.")
    grayscale = True
    single_depth = True

elif l == 3:
    print("As the shape has length 3, it could either be a multi-layer grayscale image or a single-layer rgb image.")
    if data_shape[2] == 3:
        print("As the third dimension of the data is 3, it will be interpreted as a single-layer rgb image "
              "with data corresponding to (width, height, color channel).")
        grayscale = False
        single_depth = True
    else:
        print("As the third dimension of the data is not 3, it will be interpreted as a multi-layer grayscale image "
              "with data corresponding to (depth, width, height).")
        grayscale = True
        single_depth = False

elif l == 4:
    print("As the shape has length 4, it will be interpreted as a multi-layer rgb image.")
    grayscale = False
    single_depth = False

else:
    print("Error: data shape has invalid length:", l, ", but should be 2, 3, or 4.")
    sys.exit()

if single_depth:
    if grayscale:
        if cv2.imwrite(out_path + "-out.tiff", data):
            print("Data exported successfully in the", out_path + "-out.tiff file.")
        else:
            print("Error while exporting data in the", out_path + "-out.tiff file.")
    else:
        if cv2.imwrite(out_path + "-out.tiff", cv2.cvtColor(data.astype(np.uint8), cv2.COLOR_RGB2BGR)):
            print("Data exported successfully in the", out_path + "-out.tiff file.")
        else:
            print("Error while exporting data in the", out_path + "-out.tiff file.")

else:
    print("As the data has several depth layers, it will be stored in the", out_path + "-out directory.")

    if os.path.exists(out_path + "-out"):
        print("Directory already exists. The files inside will be replaced.")
    else:
        print("Directory does not exist yet; it will be created.")
        os.mkdir(out_path + "-out/")

    for i in range(data_shape[0]):
        path = out_path + "-out/layer" + str(i).zfill(2) + ".tiff"
        if grayscale:
            if cv2.imwrite(path, data[i]):
                print("Data exported successfully in the", path, "file.")
            else:
                print("Error while exporting data in the", path, "file.")
        else:
            if cv2.imwrite(path, cv2.cvtColor(data[i].astype(np.uint8), cv2.COLOR_RGB2BGR)):
                print("Data exported successfully in the", path, "file.")
            else:
                print("Error while exporting data in the", path, "file.")


