
Benchmark Code #3

Open
backnotprop opened this issue Nov 11, 2024 · 0 comments

Comments

backnotprop commented Nov 11, 2024

Did you guys open source the benchmark code?

In my own benchmark, using the same models, I'm seeing a 20-30% degradation in general inference performance regardless of model size. However, I am doing mini-batch processing and would like to try the bulk batch methods the paper indicates (a rough sketch of the comparison I mean is below).

Reference: https://phala.network/posts/confidential-computing-on-nvidia-h100-gpu-a-performance-benchmark-study
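For context, here is a minimal sketch of the mini-batch vs. bulk-batch comparison described above: timing the same prompt set at batch size 1 versus one large batch and reporting tokens/sec. This is not the paper's benchmark harness; the model name, prompt set, batch sizes, and generation length are placeholders.

```python
# Hedged sketch: compare per-prompt (mini-batch) vs. bulk-batch generation throughput.
# Assumes a CUDA GPU and the Hugging Face transformers/accelerate stack.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder; substitute the model under test
MAX_NEW_TOKENS = 128
PROMPTS = ["Explain confidential computing in one paragraph."] * 32  # synthetic workload

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # needed for padded batches
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def run(batch_size: int) -> float:
    """Generate over all prompts in chunks of batch_size; return generated tokens/sec."""
    total_new_tokens = 0
    torch.cuda.synchronize()
    start = time.perf_counter()
    for i in range(0, len(PROMPTS), batch_size):
        chunk = PROMPTS[i : i + batch_size]
        inputs = tokenizer(chunk, return_tensors="pt", padding=True).to(model.device)
        out = model.generate(**inputs, max_new_tokens=MAX_NEW_TOKENS, do_sample=False)
        # count only the newly generated tokens, not the prompt tokens
        total_new_tokens += out.shape[0] * (out.shape[1] - inputs["input_ids"].shape[1])
    torch.cuda.synchronize()
    return total_new_tokens / (time.perf_counter() - start)

print(f"mini-batch (bs=1):  {run(1):8.1f} tok/s")
print(f"bulk batch (bs=32): {run(32):8.1f} tok/s")
```

Running the same script inside and outside the confidential-computing environment and comparing the two ratios is the kind of measurement I'm trying to reproduce.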
