OE4T Meeting Notes 2022 08 11
- Gitter vs Slack
  - The main missing feature identified is the ability to add an emoji reaction to let others know you've seen a message.
  - Plan to continue using Gitter.
- Jetpack 5
  - Testing will happen with the non-public early access kit released at the beginning of July; it has a few additional features beyond the public release in March, but no huge changes.
  - Secure boot support on Orin is still lacking. The Jetpack 5 GA release should be imminent.
  - Once the Jetpack 5 GA is available, the master branch on meta-tegra will move to Jetpack 5, which will then be backported to kirkstone.
  - A kirkstone-l4t-32.7 branch will also be added for those staying on the Jetpack 4.6 release branches (see the sketch after this list).
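As a minimal sketch of what the branch split means for a downstream build (the layer path below is a placeholder, and this is not configuration shipped by meta-tegra): the layer is added to bblayers.conf as usual, and the JetPack line is chosen by which meta-tegra branch is checked out.

```
# Illustrative bblayers.conf fragment; the layer path is a placeholder.
# The JetPack line is selected by the branch checked out for meta-tegra:
# master (JetPack 5 once GA lands) or kirkstone-l4t-32.7 (JetPack 4.6),
# per the plan above.
BBLAYERS += " \
    /path/to/layers/meta-tegra \
"
```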
- Clara AGX Recipes and Holoscan
  - See https://github.com/NVIDIA/meta-tegra-clara-holoscan-mgx
  - Clara AGX is being rebranded as Holoscan; the next devkit will be Orin-based. See https://www.nvidia.com/en-us/clara/medical-devices/ and https://www.nvidia.com/en-us/clara/developer-kits/
  - Work is in progress on adding layer configuration support to build for either the iGPU or the dGPU (a hypothetical sketch of such a switch follows this list).
  - Discussed the container-based builds outlined at https://github.com/NVIDIA/meta-tegra-clara-holoscan-mgx/blob/main/env/README.md. The build container pulls DevZone content from NVIDIA automatically, which saves the steps of downloading the proprietary NVIDIA components by hand.
  - Discussed issues linking against GPL-only symbols in Jetpack 5.
    - Providing Ian with a list of customers and drivers encountering this issue will help prioritize it internally.
  - Discussed bringing Clara into meta-tegra.
    - Agreed this would be possible in principle; would need to understand the plan for maintaining it.
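A rough sketch of what an iGPU/dGPU switch could look like from a user's local.conf; the variable name below is purely hypothetical and is not the interface the layer actually exposes.

```
# Hypothetical local.conf fragment: CLARA_GPU_MODE is an illustrative name
# only, not meta-tegra-clara-holoscan-mgx's actual configuration variable.
# Select whether the image targets the integrated GPU on the module or the
# discrete GPU on the Clara/Holoscan devkit.
CLARA_GPU_MODE = "igpu"
#CLARA_GPU_MODE = "dgpu"
```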
- Recipes for pytorch in development
  - Getting the torch-tensorrt compiler to run was a challenge: there are lots of plugins which are difficult to build, and native torch gets used to compile them. The initial approach was to patch each of the plugins so they didn't need native torch (for finding CUDA, etc.).
  - Landed on building a native torch that doesn't need CUDA. This works well, but compilation time explodes to around 30 minutes. torchvision is working; lots of other packages still need work to support.
  - This doesn't really belong in meta-tegra-community: torch depends on scipy, which needs Fortran and has special requirements to build within Yocto layers.
  - Have discussed creating a meta-scientific or similar layer in OpenEmbedded; there is no existing layer for this today. Wind River created a meta-tensorflow layer, but it hasn't been maintained (a minimal layer.conf sketch for such a layer follows this list).
  - Kurt will kick off an effort on the openembedded-architecture list to create a layer in OpenEmbedded for this.
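For context on what bootstrapping such a layer involves, here is a minimal layer.conf sketch for a hypothetical meta-scientific layer; the layer name, priority, and release compatibility are placeholders, since no such layer exists yet.

```
# Minimal conf/layer.conf for a hypothetical "meta-scientific" layer; all
# names and values are placeholders.
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-scientific"
BBFILE_PATTERN_meta-scientific = "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-scientific = "6"
LAYERSERIES_COMPAT_meta-scientific = "kirkstone"
```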
- Dublin OpenEmbedded hackathon coming up September 12th. See registration link here.