- https://leimao.github.io/blog/ONNX-Runtime-CPP-Inference/
- https://github.com/cassiebreviu/cpp-onnxruntime-resnet-console-app
- https://github.com/k2-gc/onnxruntime-cpp-example
- https://github.com/Rohithkvsp/OnnxRuntimeAndorid
- https://github.com/ifzhang/ByteTrack/blob/main/deploy/ONNXRuntime/onnx_inference.py
- https://huggingface.co/models?sort=trending&search=onnx
- https://neuml.github.io/txtai/pipeline/train/hfonnx/
- https://docs.ultralytics.com/modes/export/#arguments
- https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/quantization
- https://onnxruntime.ai/docs/performance/model-optimizations/float16.html
- https://github.com/microsoft/onnxruntime-inference-examples/tree/main/quantization
- https://www.youtube.com/watch?v=Z0n5aLmcRHQ
- https://github.com/cyrusbehr/YOLOv8-TensorRT-CPP
- https://github.com/cyrusbehr/tensorrt-cpp-api
- https://github.com/mattiasbax/yolo-pose_cpp
- https://github.com/triple-Mu/YOLOv8-TensorRT (Python + C++ + TensorRT)
- https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec
- https://github.com/FourierMourier/yolov8-onnx-cpp/tree/main (Python + C++)
- https://github.com/mallumoSK/yolov8/blob/master/yolo/YoloPose.cpp
- https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/csrc/pose/normal/main.cpp
- https://github.com/Amyheart/yolo-onnxruntime-cpp
- https://github.com/UNeedCryDear/yolov8-opencv-onnxruntime-cpp
- https://github.com/ultralytics/ultralytics/tree/main/examples/YOLOv8-ONNXRuntime-CPP
- https://github.com/hpc203/yolov6-opencv-onnxruntime/tree/main
- https://github.com/hpc203/yolov5_pose_opencv
- ultralytics/ultralytics#1852
- ultralytics/yolov5#916
- https://zhuanlan.zhihu.com/p/466677699
- https://github.com/hpc203?tab=repositories
- https://velog.io/@dnchoi/ONNX-runtime-install
- https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html (see the export-and-run sketch after this list)
- https://onnxruntime.ai/docs/api/python/on_device_training/training_artifacts.html
- https://pytorch.org/tutorials/beginner/onnx/onnx_registry_tutorial.html
- https://onnxruntime.ai/docs/reference/compatibility.html
- https://github.com/onnx/onnx/blob/main/docs/Versioning.md
- https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
- https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements
- https://gitee.com/arnoldfychen/onnxruntime/blob/master/docs/execution_providers/TensorRT-ExecutionProvider.md#specify-tensorrt-engine-cache-path
- https://github.com/PaddlePaddle/FastDeploy
- https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install/download_prebuilt_libraries.md
- https://neuralmagic.com/blog/benchmark-yolov5-on-cpus-with-deepsparse/
- https://github.com/neuralmagic/sparseml/tree/main
- https://github.com/neuralmagic/deepsparse
- https://github.com/tucan9389/SemanticSegmentation-CoreML
- https://github.com/john-rocky/CoreML-Models#u2net
- https://github.com/likedan/Awesome-CoreML-Models
- https://github.com/SwiftBrain/awesome-CoreML-models
- https://github.com/PeterL1n/RobustVideoMatting
- https://coremltools.readme.io/docs/pytorch-conversion
- https://github.com/hollance/CoreMLHelpers
- https://developer.apple.com/machine-learning/api/
- https://github.com/vladimir-chernykh/coreml-performance
- https://github.com/apple/ml-4m/
- https://github.com/apple/ml-stable-diffusion
- https://huggingface.co/stabilityai/stable-diffusion-2-base
- https://github.com/Stability-AI/stablediffusion
- https://huggingface.co/CompVis/stable-diffusion-v1-4
- https://github.com/runwayml/stable-diffusion
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
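
Most of the export and inference links above (the PyTorch super-resolution tutorial and the ONNX Runtime examples in particular) reduce to the same two steps: export the network to an `.onnx` file at a pinned opset, then run it through an ONNX Runtime session. The Python sketch below is only an illustration of that flow under placeholder assumptions: the torchvision ResNet-18, the file name `model.onnx`, and the 1x3x224x224 input shape are stand-ins, not taken from any of the linked repos.

```python
# Minimal sketch: export a PyTorch model to ONNX, then run it with ONNX Runtime.
# Model, file name, and input shape are illustrative placeholders.
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# 1) Export: a torchvision ResNet-18 stands in for "your model".
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=17,
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# 2) Inference: the CPU provider keeps the example self-contained; swap in
#    "CUDAExecutionProvider" or "TensorrtExecutionProvider" when the matching
#    builds and drivers are installed (see the execution-provider docs above).
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {"input": x})
print(outputs[0].shape)  # (1, 1000) for this placeholder model
```

The C++ examples in the list follow essentially the same pattern through `Ort::Env`/`Ort::Session`; the provider list is where the CUDA and TensorRT execution-provider links above come into play.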
To provide an AI-driven optimizer that makes deep neural networks:
- faster,
- smaller,
- more energy-efficient,
- deployable from cloud to edge,
- all without compromising accuracy.
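
One concrete, widely used route to "smaller and faster without giving up accuracy" is post-training quantization and FP16 conversion, covered by the ONNX Runtime quantization and float16 links above. The sketch below is illustrative only, assuming a generic FP32 `model.onnx`; it shows dynamic INT8 quantization and FP16 weight conversion, and omits the calibration-based static quantization that the linked examples also cover.

```python
# Hedged sketch: two post-training optimizations from the links above,
# applied to a placeholder "model.onnx". Not a drop-in recipe for any
# specific model in this list.
import onnx
from onnxconverter_common import float16                      # pip install onnxconverter-common
from onnxruntime.quantization import quantize_dynamic, QuantType

# (a) Dynamic INT8 quantization: weights are stored as INT8,
#     activations are quantized on the fly at inference time.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.int8.onnx",
    weight_type=QuantType.QInt8,
)

# (b) FP16 conversion, as described in the float16 doc linked above.
fp32_model = onnx.load("model.onnx")
fp16_model = float16.convert_float_to_float16(fp32_model)
onnx.save(fp16_model, "model.fp16.onnx")
```

Static (calibration-based) quantization, shown in the linked onnxruntime-inference-examples quantization folder, generally gives better latency on INT8-capable hardware, at the cost of needing a representative calibration dataset.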