[torchcodec] Add support for Nvidia GPU Decoding #58
Conversation
This pull request was exported from Phabricator. Differential Revision: D59121006
Summary: X-link: pytorch#58

1. Add CUDA support to VideoDecoder.cpp. This is done by checking which device is passed in the options and using CUDA if the device type is `cuda`.
2. Add a `-DENABLE_CUDA` flag in CMake.
3. Check the `ENABLE_CUDA` environment variable in setup.py and pass it down to CMake if it is present.
4. Add a unit test to demonstrate that CUDA decoding works. It compares against a different reference tensor than the CPU test because hardware decoding is intrinsically slightly inaccurate. I generated the reference tensor by dumping the decoded tensor from the GPU on my devVM; different Nvidia hardware may produce different outputs, and how to test this more robustly is TBD.
5. Add a new CUDA device index parameter to `add_video_stream`. If it is present, we will use it to do hardware decoding on a CUDA device (see the usage sketch below).

There is a whole bunch of TODOs:

1. GPU utilization is currently only 7-8% while decoding; we need to get this higher.
2. Make it faster than the CPU implementation. Decoding alone is currently slower than CPU decoding even for HD videos (probably because we can't hide the CPU-to-GPU memcpy), but decode+resize is faster, as the benchmark shows.

Differential Revision: D59121006
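For anyone who wants to try it, here is a minimal usage sketch. The module path, function names, and the exact `device` parameter are assumptions based on the summary, not the final API:

```python
# Hypothetical usage sketch -- module path and signatures are assumptions.
from torchcodec.decoders import _core as core

# Create a decoder and attach a video stream that decodes on GPU 0.
# The `device` argument stands in for the new CUDA device parameter from
# point 5 of the summary; its exact name may differ.
decoder = core.create_from_file("video.mp4")
core.add_video_stream(decoder, device="cuda:0")

# Frames decoded by NVDEC come back as CUDA tensors, avoiding a round
# trip through host memory when later steps (e.g. resize) run on GPU.
frame = core.get_next_frame(decoder)
print(frame.device)  # expected: cuda:0
```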
Summary: Pull Request resolved: #137. Pull Request resolved: #58. X-link: #58. Reviewed By: scotts. Differential Revision: D59121006. fbshipit-source-id: da6faa60c8de5d8e6ad90f8897d339c9979005f1
When will it be available for use?
It's available if you build torchcodec from source by installing the pre-reqs and running a CUDA-enabled build. Something along these lines should work, assuming setup.py picks up the `ENABLE_CUDA` variable as described in the summary (the exact invocation may differ):
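```sh
# ENABLE_CUDA is read by setup.py and forwarded to CMake (-DENABLE_CUDA),
# per the summary; the editable-install form here is an assumption.
ENABLE_CUDA=1 pip install -e .
```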
We plan to release it to pip later this year.
@ahmadsharif1 Thanks for your answer! Do you plan to support RTSP GPU decoding?
@Ilyabasharov I believe RTSP should work if FFmpeg is configured correctly in your environment. If you are building FFmpeg yourself, you can add the CUDA-related options to your configure line to get that; something like the following, though the exact flags vary by FFmpeg version (see its hardware-acceleration docs):
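```sh
# Assumed configure flags for Nvidia hardware decoding (nvdec/cuvid).
# Requires Nvidia's nv-codec-headers to be installed first;
# --enable-libnpp needs the CUDA toolkit and --enable-nonfree.
./configure --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvdec --enable-libnpp
```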
Also, pip binaries are available now on PyTorch nightly; an install along these lines should work (the exact index URL and CUDA tag depend on your setup):
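```sh
# The index URL and CUDA tag (cu121 here) are assumptions; pick the ones
# matching your CUDA version from the PyTorch "get started" page.
pip install torchcodec --index-url https://download.pytorch.org/whl/nightly/cu121
```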