No CoreML backend??? #7
There should be no need to install multiple versions of onnxruntime: either onnxruntime-silicon or onnxruntime-coreml is enough on its own. After a correct installation, check which providers are available:

import onnxruntime as rt
rt.get_available_providers()
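A pure-Python sketch of the selection behaviour behind that check (an illustration only, not ONNX Runtime's actual code): the runtime walks the requested provider list, keeps the ones that are actually available, and always falls back to the CPU provider. If `get_available_providers()` does not report `CoreMLExecutionProvider`, requesting it silently drops out.

```python
# Illustration of ONNX Runtime's provider-preference behaviour: requested
# providers that are unavailable are dropped, and CPUExecutionProvider is
# always kept as the fallback. The real logic lives inside InferenceSession.

def select_providers(preferred, available):
    """Return the preferred providers that are available, plus the CPU fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# A correctly installed onnxruntime-silicon build reports both providers:
print(select_providers(["CoreMLExecutionProvider"],
                       ["CoreMLExecutionProvider", "CPUExecutionProvider"]))

# With only a plain CPU build installed, CoreML silently drops out:
print(select_providers(["CoreMLExecutionProvider"],
                       ["CPUExecutionProvider"]))
```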
@cansik Thank you very much. I have made the changes you suggested, but I am still not sure whether inference actually goes through CoreML, because ONNX inference is still very slow and the logcat output has not changed: 2023-07-25 15:13:37.023599 [I:onnxruntime:, inference_session.cc:263 operator()] Flush-to-zero and denormal-as-zero are off ..... I don't know where the problem is; the speed is very slow, each inference takes 0.352567195892334 s * 1000 ≈ 352.567 milliseconds. Thanks in advance!
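When comparing latencies like the one quoted above, it helps to exclude one-off startup costs: the first `run()` call often pays for graph optimisation or CoreML model compilation. A minimal timing sketch (the `run_inference` function here is a hypothetical stand-in for `onnxSession.run(...)`):

```python
import time

# Sketch: measure per-inference latency with a warm-up phase, so one-off
# costs (graph optimisation, CoreML model build) don't skew the average.
# run_inference is a dummy workload standing in for onnxSession.run(...).

def run_inference():
    sum(i * i for i in range(10_000))

def mean_latency_ms(fn, warmup=3, runs=20):
    for _ in range(warmup):          # pay one-off costs before timing
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0

print(f"{mean_latency_ms(run_inference):.3f} ms per inference")
```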
Could it be because of the supported ops for the CoreML backend? If an op is not implemented there, execution falls back to the CPU, which fragments the computation and the data transfer between backends.
When I install this dependency, I get this error. What is the solution?
@heliumsoft Instead of writing into a random issue, please read the FAQ first. If there are still questions, open an issue where you describe the problem, the environment you are working in, and what you have already tried.
@cansik Thank you. Do you have experience installing onnxruntime-coreml? Also, can we use onnxruntime-silicon on a MacBook Pro with an Intel Core i7?
ChatGPT thinks so too: From your log file, it appears that the PRelu and Pad operations are not supported by the CoreML Execution Provider on your device, as indicated by the log lines marking them as supported [0]. These operations fall back to the CPU, which can significantly slow down inference.

The specific unsupported operations mentioned in the logs: PRelu: all PRelu operations listed are marked as unsupported by CoreML. The slope input for PRelu must either have a shape of [C, 1, 1] or be a single value to be supported, which is not the case in your model.

To address these unsupported operations:

Modify or replace unsupported operations: if possible, modify the PRelu layers to meet CoreML's requirements, or replace them with activation functions CoreML supports, such as ReLU. For the Pad operations, consider adjusting the network architecture to reduce or eliminate the need for padding, or adjust the padding parameters to match the supported configurations in the CoreML documentation.

Custom layers: for operations that are essential and cannot be replaced or removed, consider implementing custom layers. This requires additional development and may impact performance.

Model conversion and simplification: convert the model to a format more naturally compatible with iOS/macOS development, such as the CoreML model format directly, which may give more control over how these operations are handled.

Improving CoreML compatibility lets the model use the full computational capabilities of Apple's hardware, potentially leading to significant improvements in inference time.
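The "supported [0]" pattern described above can be checked mechanically instead of by eye. A small sketch that extracts the unsupported operator types from verbose log lines; the exact line format here is an assumption based on this thread, so adjust the regular expression to match your ONNX Runtime build's output:

```python
import re

# Sketch: collect operator types the CoreML EP marks as unsupported in a
# verbose ONNX Runtime log. Assumes lines of the (hypothetical) form
#   "Operator type: [PRelu] index: [12] supported: [0]"
# where supported: [0] means the node falls back to the CPU.

PATTERN = re.compile(r"Operator type: \[(\w+)\].*supported: \[(\d)\]")

def unsupported_ops(log_lines):
    ops = set()
    for line in log_lines:
        m = PATTERN.search(line)
        if m and m.group(2) == "0":
            ops.add(m.group(1))
    return sorted(ops)

log = [
    "Operator type: [PRelu] index: [12] supported: [0]",
    "Operator type: [Conv] index: [13] supported: [1]",
    "Operator type: [Pad] index: [20] supported: [0]",
]
print(unsupported_ops(log))  # -> ['PRelu', 'Pad']
```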
@angewandte-codinglab Please do not copy/paste what ChatGPT "thinks". Answer questions from your own knowledge and experience.
I am doing a face model
https://github.com/iperov/DeepFaceLive/releases/download/ZAHAR_LUPIN/Zahar_Lupin.dfm
environment:
onnxruntime-coreml == 1.13.1
onnxruntime-silicon == 1.13.1
device : Apple Silicon M1
python:
import onnx
import onnxruntime as rt
options = rt.SessionOptions()
options.log_severity_level = 0
options.intra_op_num_threads = 4
options.execution_mode = rt.ExecutionMode.ORT_SEQUENTIAL
options.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_ALL
onnxSession = rt.InferenceSession(onnx_model_path, options, providers=[{"CoreMLExecutionProvider"}])
logcat:
EP Error using [{'CoreMLExecutionProvider'}]
Falling back to ['CPUExecutionProvider'] and retrying.
2023-07-25 11:11:48.215511 [I:onnxruntime:, inference_session.cc:263 operator()] Flush-to-zero and denormal-as-zero are off
2023-07-25 11:11:48.215528 [I:onnxruntime:, inference_session.cc:271 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2023-07-25 11:11:48.215535 [I:onnxruntime:, inference_session.cc:292 ConstructorCommon] Dynamic block base set to 0
EP Error using [{'CoreMLExecutionProvider'}]
Falling back to ['CPUExecutionProvider'] and retrying.
2023-07-25 11:11:48.215793 [I:onnxruntime:, inference_session.cc:263 operator()] Flush-to-zero and denormal-as-zero are off
2023-07-25 11:11:48.215804 [I:onnxruntime:, inference_session.cc:271 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2023-07-25 11:11:48.215810 [I:onnxruntime:, inference_session.cc:292 ConstructorCommon] Dynamic block base set to 0
EP Error using [{'CoreMLExecutionProvider'}]
Falling back to ['CPUExecutionProvider'] and retrying.
2023-07-25 11:11:48.222091 [I:onnxruntime:, inference_session.cc:263 operator()] Flush-to-zero and denormal-as-zero are off
2023-07-25 11:11:48.222108 [I:onnxruntime:, inference_session.cc:271 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2023-07-25 11:11:48.222116 [I:onnxruntime:, inference_session.cc:292 ConstructorCommon] Dynamic block base set to 0
2023-07-25 11:11:48.290081 [I:onnxruntime:, inference_session.cc:1222 Initialize] Initializing session.
2023-07-25 11:11:48.293215 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2023-07-25 11:11:48.295481 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295512 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295524 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295533 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295540 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295547 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295555 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295562 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295569 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295577 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295584 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295591 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295599 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295606 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295613 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295620 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295626 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.295634 [V:onnxruntime:, selector_action_transformer.cc:129 MatchAndProcess] Matched Conv
2023-07-25 11:11:48.296167 [V:onnxruntime:, session_state.cc:1010 VerifyEachNodeIsAssignedToAnEp] Node placements
2023-07-25 11:11:48.296189 [V:onnxruntime:, session_state.cc:1013 VerifyEachNodeIsAssignedToAnEp] All nodes placed on [CPUExecutionProvider]. Number of nodes: 69
2023-07-25 11:11:48.296278 [V:onnxruntime:, session_state.cc:66 CreateGraphInfo] SaveMLValueNameIndexMapping
2023-07-25 11:11:48.296307 [V:onnxruntime:, session_state.cc:112 CreateGraphInfo] Done saving OrtValue mappings.
2023-07-25 11:11:48.296561 [I:onnxruntime:, session_state_utils.cc:199 SaveInitializedTensors] Saving initialized tensors.
2023-07-25 11:11:48.297107 [I:onnxruntime:, session_state_utils.cc:342 SaveInitializedTensors] Done saving initialized tensors
###############
onnxruntime-silicon has been installed, but it only uses the CPU backend, not the CoreML backend. I don't know the reason; can you help me?
Thanks in advance!
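The repeated "EP Error using [{'CoreMLExecutionProvider'}]" lines point at the shape of the providers argument: it is a list containing a set, while ONNX Runtime expects a list of provider-name strings (or (name, options-dict) tuples). A stdlib-only sketch of that contract, mirroring why the call above falls back to the CPU; the `validate_providers` helper is illustrative, not part of ONNX Runtime:

```python
# Sketch of the providers-argument contract: each entry must be a string
# like "CoreMLExecutionProvider" or a (name, options_dict) tuple. A set,
# as in providers=[{"CoreMLExecutionProvider"}], is rejected, so the
# session falls back to CPUExecutionProvider.

def validate_providers(providers):
    for p in providers:
        if isinstance(p, str):
            continue
        if (isinstance(p, tuple) and len(p) == 2
                and isinstance(p[0], str) and isinstance(p[1], dict)):
            continue
        raise ValueError(f"invalid provider entry: {p!r}")
    return list(providers)

# The fix is simply to pass strings:
print(validate_providers(["CoreMLExecutionProvider", "CPUExecutionProvider"]))

# The failing form from the code above:
try:
    validate_providers([{"CoreMLExecutionProvider"}])
except ValueError as e:
    print("rejected:", e)
```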