
Is there any way to set a consistent seed value to keep similar details between long sequences for videos cut up in parts to save VRAM? #22

Open
redman4585 opened this issue Feb 7, 2025 · 3 comments


@redman4585

If I run a long video sequence through StereoCrafter and cut it into parts to save VRAM, the overall details and textures change with every clip of the same sequence.

Is there a way to keep a consistent seed value, so that when someone processes a video in parts and later edits them back into one long sequence, the details stay the same?

I think the default Stable Video Diffusion seed value is -1, which means the generated details will be different every time. This might be why the camel demo cannot be replicated exactly. If someone set the seed to a fixed value like "12345" and everybody used that, they should all get similar results.

However, I can't find a way to change the seed value to a known number in inpainting_inference.py.

    video_latents = spatial_tiled_process(
        input_frames_i,
        mask_frames_i,
        pipeline,
        tile_num,
        spatial_n_compress=8,
        min_guidance_scale=1.01,
        max_guidance_scale=1.01,
        decode_chunk_size=8,
        fps=7,
        motion_bucket_id=127,
        noise_aug_strength=0.0,
        num_inference_steps=8,
    )
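The closest workaround I can think of, assuming the pipeline falls back to PyTorch's global generator when no generator object is passed explicitly, is to re-seed torch right before this call for every chunk:

    import torch

    SEED = 12345  # arbitrary fixed value; any integer works as long as everyone reuses it

    # Re-seed the global RNGs immediately before each spatial_tiled_process call,
    # so every chunk starts from the same noise state (assumption: the pipeline
    # draws its initial latents from torch's global generator).
    torch.manual_seed(SEED)
    torch.cuda.manual_seed_all(SEED)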
@xiaoyu258
Contributor

For reproducibility, you could refer to https://pytorch.org/docs/stable/notes/randomness.html to generate the same results.

For video clips from different cuts, it is hard to keep the results consistent even with the same random seed, since the input content is different.
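As a minimal sketch of what that page suggests (the exact set of flags that matters depends on which ops the pipeline actually uses):

    import random
    import numpy as np
    import torch

    def seed_everything(seed: int = 12345) -> None:
        # Seed Python, NumPy, and PyTorch (CPU and all GPUs) from one value.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Optionally force deterministic cuDNN kernels; this can be slower, and
        # full determinism may still require torch.use_deterministic_algorithms(True).
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False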

@redman4585
Author

Thanks. This is mostly for static scenes where the gap being filled has a lot of detail. My hope is that the texture stays identical between the last frame of one cut and the start of the next.

@redman4585
Author

redman4585 commented Feb 16, 2025

Another thing you could do with StereoCrafter is to reuse partial frames from the previous chunk to maintain color accuracy in the next chunk.

That should be doable.

You can have it as a setting just for long sequences.
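A rough sketch of the chunking I have in mind (plain bookkeeping, nothing StereoCrafter-specific; the chunk_size and overlap values are just placeholders):

    def chunk_with_overlap(frames, chunk_size=65, overlap=8):
        # Split a frame list into chunks, prepending the last `overlap` frames
        # of the previous chunk so the model sees shared context (and color)
        # across the cut. When joining the outputs back together, drop the
        # first `overlap` frames of every chunk except the first.
        chunks = []
        start = 0
        while start < len(frames):
            begin = start - overlap if chunks else start
            chunks.append(frames[begin:start + chunk_size])
            start += chunk_size
        return chunks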
