
[torchcodec] Add support for Nvidia GPU Decoding #58

Closed

wants to merge 0 commits into from

Conversation

ahmadsharif1 (Contributor)

Summary:

  1. Add CUDA support to VideoDecoder.cpp. This is done by checking which device is passed in via the options and using CUDA if the device type is cuda.
  2. Add a -DENABLE_CUDA flag in CMake.
  3. Check the ENABLE_CUDA environment variable in setup.py and pass it down to CMake if it is present.
  4. Add a unit test to demonstrate that CUDA decoding works. It uses a different reference tensor than the CPU test because hardware decoding is intrinsically slightly inaccurate. I generated the reference tensor by dumping the GPU output on my devVM; different Nvidia hardware may produce different outputs, so a more robust way to test this is TBD.
  5. Added a new parameter for the CUDA device index to add_video_stream. If it is present, we will use it to do hardware decoding on that CUDA device (see the sketch after this list).
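
A minimal usage sketch of point 5. The import path, function names, and the name of the new device argument below are illustrative assumptions, not the PR's confirmed API:

```python
# Hedged sketch only: module path, function names, and the "device" argument
# are assumptions for illustration.
from torchcodec.decoders._core import (  # hypothetical module path
    create_from_file,
    add_video_stream,
    get_next_frame,
)

decoder = create_from_file("video.mp4")

# Passing a CUDA device requests hardware (NVDEC) decoding on that GPU;
# omitting it keeps the existing CPU decoding path.
add_video_stream(decoder, device="cuda:0")  # hypothetical argument name

output = get_next_frame(decoder)  # decoded frame data should now live on the GPU
```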

Differential Revision: D59121006
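
On the open question in point 4 (robust testing across different Nvidia hardware), one hedged possibility is to compare the GPU output against the CPU output with a loose tolerance instead of against a device-specific reference dump. A sketch, not the PR's actual test:

```python
import torch

def assert_gpu_frame_close_to_cpu(
    gpu_frame: torch.Tensor, cpu_frame: torch.Tensor, mean_atol: float = 3.0
) -> None:
    # Tolerate small NVDEC-vs-CPU differences rather than requiring bit-exact
    # equality. The tolerance is an illustrative assumption (frames assumed
    # uint8 in [0, 255]) and would need tuning per codec and content.
    diff = (gpu_frame.detach().cpu().float() - cpu_frame.detach().float()).abs()
    mean_diff = diff.mean().item()
    assert mean_diff < mean_atol, f"mean abs diff {mean_diff:.2f} exceeds {mean_atol}"
```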

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Jul 1, 2024
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D59121006


@ahmadsharif1 deleted the export-D59121006 branch July 31, 2024 21:32

ahmadsharif1 added a commit to ahmadsharif1/torchcodec that referenced this pull request Jul 31, 2024
Summary:
X-link: pytorch#58

(Summary points 1-5 as in the pull request description above.)

There are a number of TODOs:
1. GPU utilization is currently only 7-8% when decoding the video; we need to get this higher.
2. Make it faster than the CPU implementation. Plain decoding is currently slower than CPU decoding, even for HD videos (probably because we can't hide the CPU-to-GPU memcpy), but decode+resize is already faster, as the benchmark shows (see the timing sketch after this list).
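
A rough sketch of that decode+resize timing comparison. The decode_resize_cpu / decode_resize_gpu helpers are hypothetical placeholders; the point is only that the GPU path is measured end to end, including the CPU-to-GPU transfer:

```python
import time

import torch

def time_it(fn, iters: int = 20) -> float:
    # Hedged benchmark harness: returns average wall-clock seconds per call.
    fn()  # warm-up so one-time costs (CUDA context, codec init) are excluded
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

# decode_resize_cpu / decode_resize_gpu are hypothetical helpers standing in
# for "decode the video and resize every frame" on each device:
# cpu_s = time_it(lambda: decode_resize_cpu("video.mp4", (256, 256)))
# gpu_s = time_it(lambda: decode_resize_gpu("video.mp4", (256, 256), device="cuda:0"))
# print(f"CPU: {cpu_s:.3f} s/iter, GPU: {gpu_s:.3f} s/iter")
```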

Differential Revision: D59121006
facebook-github-bot pushed a commit that referenced this pull request Aug 1, 2024
Summary:
Pull Request resolved: #137

Pull Request resolved: #58

X-link: #58

(Summary and TODOs as above.)

Reviewed By: scotts

Differential Revision: D59121006

fbshipit-source-id: da6faa60c8de5d8e6ad90f8897d339c9979005f1
@ilbash commented Oct 24, 2024

When will it be available for use?

@ahmadsharif1 (Contributor, Author)

It's available if you build torchcodec from source by installing pre-reqs and running:

CMAKE_BUILD_PARALLEL_LEVEL=8 CMAKE_BUILD_TYPE=Release ENABLE_CUDA=1 pip install -e . --no-build-isolation -vv

We plan to release it to pip later this year.
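
For context on what the ENABLE_CUDA=1 toggle in the command above does, here is a minimal sketch of how such an environment variable can be forwarded from setup.py to CMake, as described in point 3 of the summary. Names, paths, and layout are illustrative assumptions, not torchcodec's actual build code:

```python
# Hedged sketch of forwarding an ENABLE_CUDA environment toggle to CMake.
import os
import subprocess

def configure_native_build(source_dir: str = ".", build_dir: str = "build") -> None:
    cmake_args = [
        f"-DCMAKE_BUILD_TYPE={os.environ.get('CMAKE_BUILD_TYPE', 'Release')}",
    ]
    if os.environ.get("ENABLE_CUDA"):
        # Only define the flag when the user opted in, so the default build
        # stays CPU-only.
        cmake_args.append("-DENABLE_CUDA=ON")
    subprocess.check_call(["cmake", "-S", source_dir, "-B", build_dir, *cmake_args])

if __name__ == "__main__":
    configure_native_build()
```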

@ilbash commented Oct 24, 2024

@ahmadsharif1 Thanks for your answer! Do you plan to support RTSP GPU decoding?

@ahmadsharif1 (Contributor, Author) commented Nov 11, 2024

@Ilyabasharov I believe RTSP should work if FFmpeg is configured correctly in your environment.

You can run ffmpeg -demuxers to see whether your ffmpeg binaries were built with RTSP support.

If you are building FFmpeg yourself, you can add this to your configure line to enable it:

--enable-demuxer=rtsp

Also, pip binaries are now available on the PyTorch nightly index:

pip3 install --pre torchcodec --index-url https://download.pytorch.org/whl/nightly/cu124

Labels: CLA Signed (managed by the Meta Open Source bot), fb-exported

3 participants