Implement new renderers and utility methods #64
Conversation
Hey @LeMilenov, I was trying out your pull request and noticed it has a hardcoded mesh .obj and some other files it asks for in mitsuba_default. What exactly are we supposed to put in that folder, and are there any instructions for preparing it? |
@skyler14 For now, Mitsuba 3 uses scenes to render, and we need a mesh and textures to initialize the scene up front. Those values (mesh data, textures, etc.) are overwritten at runtime. I just kept them hardcoded for now because I haven't had the time to generate those values through code directly.
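To illustrate the "placeholder scene, overwritten at runtime" idea, here is a minimal sketch using Mitsuba 3's traversal API; the scene path and parameter key are hypothetical and depend on the actual scene description:

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

# Load the placeholder scene (hypothetical path inside mitsuba_default),
# then inspect which parameters can be overwritten at runtime.
scene = mi.load_file('mitsuba_default/scene.xml')
params = mi.traverse(scene)
print(params)  # lists every parameter key exposed by the scene

# e.g. replace the placeholder mesh data, then commit the change:
# params['mesh.vertex_positions'] = new_positions  # key name is hypothetical
# params.update()
```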
|
Mostly clear, @LeMilenov. Can these just be empty files with the right name, or do they need real content? I tried feeding it a dummy .obj I had floating around and this was the error I got. Also, are there environment/Python/CUDA version changes to your code, given that you set out to make this RTX compatible (which would imply running things on CUDA 11):
|
@skyler14 in my testing I have worked with 256x256 images (textures, maps, input image, output image) and 512x512. In the config file there is a maxRes value; set it to the resolution you want, then feed an input image and textures of that size. For my setup : |
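As an illustration of matching inputs to that resolution (a sketch; the filename is hypothetical and maxRes is the config value mentioned above):

```python
from PIL import Image

# Resize an input image to the resolution configured as maxRes
# (256 matches the setup described above).
max_res = 256
img = Image.open('input.jpg').convert('RGB')
img = img.resize((max_res, max_res), Image.LANCZOS)
img.save('input_256.jpg')
```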
hello @abecadel, the issue seems to be coming from your mitsuba scene description, could you share it? Your error message mentions the gaussian filter; it should look something like this (screenshot attached): https://mitsuba.readthedocs.io/en/stable/src/generated/plugins_films.html But the gaussian filter should already be the default. I would need more information on your scene parameters to help reproduce the error. |
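A minimal sketch of such a film declaration with an explicit gaussian reconstruction filter, in Mitsuba 3 dictionary form, following the films page linked above (the resolution values are placeholders, not from this PR):

```python
import mitsuba as mi

mi.set_variant('scalar_rgb')

# Film with an explicit gaussian reconstruction filter, as described on
# the plugins_films documentation page.
film = mi.load_dict({
    'type': 'hdrfilm',
    'width': 256,
    'height': 256,
    'rfilter': {'type': 'gaussian'},
})
```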
I'm actually getting the same exception when I try running this fork with the drive folder example you posted. Can you share some environment info as well (did you stay with CUDA 10 or go to 11, and what GPU/OS)?
|
@LeMilenov the Mitsuba scene is the same as in the mitsuba renderer; I haven't made any changes to it, just using your branch ;) |
@abecadel I have tried the same code and I have no problems. Can you send me the versions of your dependencies? |
@abecadel @skyler14 here is my setup: Windows 11 Pro. Package Version: absl-py 1.4.0 |
Here's a google colab notebook I'm running it on: NextFace_Mitsuba.zip (https://github.com/abdallahdib/NextFace/files/12650021/NextFace_Mitsuba.zip) |
I see in your debugs that you are using a different version of mediapipe and a different version of mitsuba and of drjit. You also seem to be using Python 3.10 and Linux. Can you try with the same versions as I did, so that it can help me pinpoint the issue? Also, what graphics card do you have, and what does your architecture look like? |
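A quick sketch for producing a comparable environment report:

```python
import sys

import drjit
import mediapipe
import mitsuba

# Print the interpreter and package versions to compare environments.
print('python   ', sys.version)
print('mediapipe', mediapipe.__version__)
print('mitsuba  ', mitsuba.__version__)
print('drjit    ', drjit.__version__)
```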
You did put most of your environment info, but failed to mention if you're still using Python 3.6.7 like the original repo?
|
Thanks, I remade the environment using your guidance. I'm getting through a lot of the process and writing files, but I'm still getting those exceptions and have to kill the entire terminal. Before this: redner-gpu doesn't show a valid version when I try to pip install with 3.9, so I used the image here (since we're not using redner directly, I just did this to make sure it wouldn't throw errors while running mitsuba): BachiLi/redner#169 (comment) Note: I used the emily example and adjusted things to 256 in the config as you mentioned. Here's the error:
Here's my installation (Win 10, 3090):
|
Leaking in Mitsuba 3 is quite common. This particular error happens when Python finishes and needs to destroy the allocated objects: they are destroyed in an undefined order, which makes it difficult to manage the smart pointers used internally in the DrJit implementation. More information: mitsuba-renderer/drjit#87 However, the log that you posted shows that you successfully optimized the shape. |
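One common workaround is to drop references to DrJit-backed objects in a controlled order before the interpreter exits; a sketch with hypothetical object names:

```python
import gc

# Hypothetical stand-ins for the scene, parameter map, and optimizer
# created earlier in the script.
scene = object()
params = object()
optimizer = object()

# Drop the references in a controlled order, then force a collection
# while the DrJit backend is still alive, instead of leaving destruction
# to interpreter shutdown.
del optimizer, params, scene
gc.collect()
```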
so is there a particular module (or modules) that would be easy enough to manually delete? |
It will not fill up your memory. All the allocated objects are deleted after your script finishes; it is just the order of the delete operations that triggers this "error". Please see the issue I linked above to learn more. |
A bigger issue is that the program never closes. The terminal session just hangs after the exception, so it's a bit hard to tell if any allocation remains, and the program freezes on that exception message, so I literally have to kill the terminal session and start a new one. |
@LeMilenov when running in mediapipe mode I noticed this error. Not sure if your render code accounts for how mediapipe does detection; I think some of its coordinate systems are normalized to [0, 1]
|
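For context: MediaPipe landmark coordinates are indeed normalized to [0, 1] relative to the image, so a conversion like this sketch is needed before treating them as pixel positions (the function and names are illustrative, not from this PR):

```python
# MediaPipe returns landmark coordinates normalized to [0, 1]; convert
# them to pixel coordinates before handing them to the renderer.
def to_pixels(landmark_x, landmark_y, image_width, image_height):
    x_px = min(int(landmark_x * image_width), image_width - 1)
    y_px = min(int(landmark_y * image_height), image_height - 1)
    return x_px, y_px

print(to_pixels(0.5, 0.25, 256, 256))  # -> (128, 64)
```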
I did not have this issue when I tested mediapipe on the latest version, but I did have it in the past. The issue is pretty straightforward: the texture is exceeding the valid range. There should be a clamp between 0 and 1 before rendering. It's just a warning; it happens sometimes in specific use cases where the optimisation converges towards extreme colors.
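A minimal sketch of such a clamp in PyTorch, assuming a hypothetical texture tensor:

```python
import torch

# Hypothetical optimized texture; real values can drift outside [0, 1]
# when the optimisation converges towards extreme colors.
diffuse_texture = torch.randn(256, 256, 3)

# Clamp into the valid range before handing the texture to the renderer.
diffuse_texture = torch.clamp(diffuse_texture, 0.0, 1.0)
```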
Got it @LeMilenov. When you start rendering, is it relying on the contents of the _map images to perform its projection onto the face, or just the coefficient terms? When you load the output pickles it seems to preserve all of these, so I assume they are all used to generate the final rendered face inverse. I am going to test out some mesh refinements.
I'm curious if you could explain a bit more about which of these are used. Relating to the exception I mentioned: looking at the local variables right before execution ends (below), any ideas which objects hold the DrJit-related modules that I should try manually deleting to maybe prevent this error?
|
@skyler14 here is how it works: |
So, since mitsuba/redner use the finalized textures instead of the coefficients directly, if we perform any further stages of improvement on the texture then we can improve our final render, correct? But we can't perform a processing stage on the intermediate textures each iteration, since those are just an output (and aren't directly fed back in / used to calculate loss). But post-processing is on the table, if I understand correctly. And another small question: |
@skyler14 we use spherical harmonics coefficients for the illumination. And yes, since the textures are an output, you can do post-processing; or, since the repo is open source, you can do your own post-processing and just re-use the textures inside the implementation. |
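As a sketch of that kind of post-processing on an exported texture (filenames are hypothetical; in standard UV mapping the coordinates are relative, so the mesh itself needs no change):

```python
from PIL import Image

# Upsample an exported texture; UV coordinates are normalized, so keeping
# the aspect ratio means the existing mesh and UVs still apply.
tex = Image.open('diffuseMap.png')
tex = tex.resize((tex.width * 2, tex.height * 2), Image.LANCZOS)
tex.save('diffuseMap_2x.png')
```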
Great. If I do upsample the final texture, what changes need to be made to the vertices to support the upsampling? (Or is the texture atlas placed onto the vertices by relative location, so you just need to preserve the aspect ratio?) |
@LeMilenov I'm getting: AttributeError: jit_init_thread_state(): the CUDA backend hasn't been initialized. Make sure to call jit_init(JitBackend::CUDA) to properly initialize this backend. My package list: Package Version absl-py 1.4.0 Any help would be appreciated please =) |
@Loammm what have you tried running? What action were you trying to do? I am asking because, usually, in the python files or jupyter notebooks we initialize CUDA and other things. |
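For reference, in Mitsuba 3 the DrJit CUDA backend is initialized when a CUDA variant is selected, so something like this sketch must run before any scene or optimizer is created:

```python
import mitsuba as mi

# Selecting a CUDA variant is what initializes the DrJit CUDA backend;
# do this before creating any scenes, films, or optimizers.
mi.set_variant('cuda_ad_rgb')
print(mi.variant())  # sanity check: should print 'cuda_ad_rgb'
```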
Two questions. First: if I want to align the face with another image of the same face, what should I be doing? IIRC the version using the other renderer sort of supported this out of the gate by allowing you to train with multiple pictures at once. What's missing here in order to either reintroduce that feature, or at least allow us to fit the single-image geometry back onto another picture of the same person? And second: the reconstructed face on an image looks so much better than when we load the .obj file in an application, where the obj looks horrible. What exactly is going on to make the image reconstruction look so much better; is the mitsuba render a point cloud or something similar? |
Implementation of the following renderers:
Mitsuba is a well-documented, differentiable ray-tracing renderer that supports:
for more information, refer to the Mitsuba 3 documentation.
Vertex-based is a deterministic method that gives fast results and can help debug complex architectures.
Mitsuba gives up to 5x faster results than Redner and captures more small geometric details.
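For readers new to Mitsuba 3, a minimal end-to-end sketch using its built-in Cornell box test scene (not code from this PR; swap in 'llvm_ad_rgb' on CPU-only machines):

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # or 'llvm_ad_rgb' without an NVIDIA GPU

# Render Mitsuba's built-in Cornell box test scene and save the result.
scene = mi.load_dict(mi.cornell_box())
image = mi.render(scene, spp=16)
mi.util.write_bitmap('cornell.png', image)
```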