Environment
GPU: NVIDIA RTX 3090
VRAM: 24GB
Operating System: Ubuntu 20.04
Model: sam2.1_hiera_tiny.pt
Issue Description
I'm seeing significantly lower inference performance than the paper's reported results. The paper reports 47.2 FPS on a single A100 GPU, but I only get around 25-26 FPS on an RTX 3090. I run inference with a prompt for a single object on the first frame (see the sketch below).
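For reference, this is roughly how I run and time the propagation, using the standard SAM2 video predictor API; the video path and click coordinates are placeholders for my actual data:

```python
import time
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_tiny.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_t.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint, device="cuda")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Placeholder path to a directory of extracted JPEG frames.
    state = predictor.init_state(video_path="./videos/example_frames")

    # Prompt a single object with one positive click on the first frame
    # (coordinates are placeholders).
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[300, 250]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = positive click
    )

    # Propagate through the video and measure throughput.
    start = time.time()
    num_frames = 0
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        num_frames += 1
    elapsed = time.time() - start
    print(f"{num_frames / elapsed:.1f} FPS over {num_frames} frames")
```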
Questions
Is this performance difference expected given the hardware differences (RTX 3090 vs A100)?
How many objects were prompted on the first frame in the paper's benchmark setup?
I would greatly appreciate any guidance on optimizing the performance for my setup.