
[torchcodec] Add support for Nvidia GPU Decoding (#58) #137

Closed

Conversation

ahmadsharif1
Contributor

Summary:

X-link: #58

  1. Add CUDA support to VideoDecoder.cpp. This is done by checking which device is passed in the decoder options and using CUDA when the device type is cuda.
  2. Add a -DENABLE_CUDA flag in cmake.
  3. Check the ENABLE_CUDA environment variable in setup.py and pass it down to cmake if it is present (see the build sketch after this list).
  4. Add a unit test to demonstrate that CUDA decoding works. It uses a different reference tensor than the CPU decoding test because hardware decoding is inherently a bit inaccurate. I generated the reference tensor by dumping the tensor from the GPU on my devVM; different Nvidia hardware may produce different outputs. How to test this more robustly is TBD.
  5. Add a new parameter for the CUDA device index to add_video_stream. If it is present, we use it to do hardware decoding on that CUDA device.
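
For illustration, here is a minimal sketch of how points 2 and 3 might fit together: setup.py reads the ENABLE_CUDA environment variable and, only when it is set, forwards -DENABLE_CUDA to cmake. The helper name and the exact cmake invocation below are assumptions for the sketch, not the actual torchcodec build code.

```python
# Minimal sketch, not the actual torchcodec setup.py: forward the
# ENABLE_CUDA environment variable to cmake as -DENABLE_CUDA.
import os
import subprocess

def cuda_cmake_args(source_dir: str, build_dir: str) -> list[str]:
    # Hypothetical helper: assemble the cmake command line for this sketch.
    args = ["cmake", "-S", source_dir, "-B", build_dir]
    if os.environ.get("ENABLE_CUDA", "0") not in ("", "0"):
        # Point 2: the cmake flag that enables the CUDA code paths.
        args.append("-DENABLE_CUDA=ON")
    return args

if __name__ == "__main__":
    subprocess.run(cuda_cmake_args(".", "build"), check=True)
```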

There are still a number of TODOs:

  1. GPU utilization is only 7-8% while decoding the video. We need to get this higher.
  2. Speed it up compared to the CPU implementation. Plain decoding is currently slower than CPU decoding even for HD videos (probably because we can't hide the CPU-to-GPU memcpy), but decode+resize is faster, as the benchmark shows (see the rough timing sketch after this list).
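
As a rough illustration of the CPU-vs-GPU comparison in TODO 2, a benchmark can time the same clip on both devices. The core-ops module path, the device keyword on add_video_stream, and the clip name below are assumptions for this sketch; the actual benchmark and test code live in the repo.

```python
# Rough timing sketch (module path, keyword name, and clip are assumptions,
# not the repo's actual benchmark): decode N frames on CPU, then on CUDA.
import time

from torchcodec.decoders import _core as core  # assumed core-ops module path

def decode_seconds(video_path: str, device: str, num_frames: int = 100) -> float:
    decoder = core.create_from_file(video_path)
    # The 'device' keyword stands in for this PR's new CUDA device parameter.
    core.add_video_stream(decoder, device=device)
    start = time.perf_counter()
    for _ in range(num_frames):
        core.get_next_frame(decoder)  # decoded frames stay on `device`
    return time.perf_counter() - start

if __name__ == "__main__":
    clip = "test.mp4"  # placeholder path
    print(f"cpu : {decode_seconds(clip, 'cpu'):.3f}s")
    print(f"cuda: {decode_seconds(clip, 'cuda:0'):.3f}s")
```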

Reviewed By: scotts

Differential Revision: D59121006

@facebook-github-bot added the CLA Signed label Jul 31, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D59121006

ahmadsharif1 added a commit to ahmadsharif1/torchcodec that referenced this pull request Jul 31, 2024

ahmadsharif1 added a commit to ahmadsharif1/torchcodec that referenced this pull request Aug 1, 2024

ahmadsharif1 added a commit to ahmadsharif1/torchcodec that referenced this pull request Aug 1, 2024
ahmadsharif1 added a commit to ahmadsharif1/torchcodec that referenced this pull request Aug 1, 2024

@facebook-github-bot
Contributor

This pull request has been merged in 8fee167.

ahmadsharif1 added a commit that referenced this pull request Aug 15, 2024
NicolasHug added a commit to NicolasHug/torchcodec that referenced this pull request Aug 28, 2024