
ofxOrt

Thin wrapper of ONNX runtime for openFrameworks.

Note

This addon is still in an experimental stage, and I'm not a skilled developer, so I'll make breaking changes frequently :(

Dependencies

Prepare

Install ONNX Runtime by following the official ONNX Runtime documentation.

Environment

The author's environment is:

  • Windows 10
  • NVIDIA Driver: 462.80
  • CUDA 11.0.221
  • cuDNN: 8.0.2.39
  • Visual Studio 2019
  • oF: 11.0

I think it may also work in other environments, such as Linux and macOS, but I have not tested them. (I'll verify this soon.)

Usage

Prepare an ONNX model and copy it to /bin.

Windows

If you use CUDA on Windows, please copy onnxruntime_providers_cuda.dll and onnxruntime_providers_shared.dll to /bin.

// First, create an ofxOrt instance
ofxOrt ort(ORT_TSTR("model.onnx"));

// Create input and output tensors
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
ofxOrtImageTensor<float> input_tensor(memory_info, content.getTexture());
ofxOrtImageTensor<float> output_tensor(memory_info, numChannels, outWidth, outHeight, true);

// Run inference
ort.forward(Ort::RunOptions{ nullptr }, input_names, &(input_tensor.getTensor()), 1, output_names, &(output_tensor.getTensor()), 1);
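
The `input_names` and `output_names` arrays in the snippet are not defined there. With the plain ONNX Runtime C++ API they can be queried from the model itself; below is a minimal sketch of that, assuming direct access to an `Ort::Session` (this README does not show whether ofxOrt exposes one, and `GetInputNameAllocated` requires a recent ONNX Runtime, roughly 1.13 or later):

// Query the model's input names via the standard ONNX Runtime C++ API
// (independent of ofxOrt; hypothetical helper for illustration).
std::vector<std::string> getInputNames(Ort::Session& session) {
  Ort::AllocatorWithDefaultOptions allocator;
  std::vector<std::string> names;
  for (size_t i = 0; i < session.GetInputCount(); ++i) {
    names.push_back(session.GetInputNameAllocated(i, allocator).get());
  }
  return names;
}

`Ort::Session::Run` expects `const char* const*` name arrays, so the returned strings need to be converted (e.g. via `c_str()`) before being passed to inference.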

References

  • ONNX Runtime Inference Examples
  • onnx_runtime_cpp

License

ofxOrt is under the MIT license.
