Fail to export CoreML model with decode layer and NMS #9694
Comments
@Qwin Ultralytics HUB exports YOLOv5 pipelined CoreML models with NMS etc. See https://hub.ultralytics.com/
@glenn-jocher Thank you so much, it worked!!! Is the iOS export code the Hub uses open source, and could I take a look at it? I just want to see what I was doing wrong and what the difference is between the script from the blog and the one the Hub uses. I know I exported the model with export.py (my guess is the Ultralytics Hub does the same), but then the script runs custom NMS code to get iOS-usable values out of it, and I am curious what it does to manipulate the model. I have a feeling the issue is that the script above expected 2 output matrices, while the script the Hub uses takes 1 matrix as input when building the missing NMS layer.
Dear Qwin, I also had the same trouble with a model trained on a custom dataset, and reported it in "Export.py need train mode export for coreml model to add NMS and decode layer #9667". The fix is to restore the train-mode path in export.py — remove the unconditional `model.eval()` inside `run()` and add the option back:

```diff
-model.eval()
+parser.add_argument('--train', action='store_true', help='model.train() mode')
```

Then I can export the three output arrays, and I can add the NMS and decode layers as we did previously.
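For readers unsure what the restored flag looks like in practice: below is a minimal, hypothetical reconstruction of the `--train` option (the real export.py defines many more arguments; only the flag and its effect are sketched here). With the flag set, `run()` would call `model.train()` instead of `model.eval()` before tracing, so the Detect head keeps its three raw output maps and an NMS/decode stage can be appended afterwards.

```python
import argparse

def make_parser():
    # Hypothetical minimal parser; export.py's real parse_opt() has many more options.
    parser = argparse.ArgumentParser()
    parser.add_argument('--train', action='store_true', help='model.train() mode')
    return parser

# In run(), the flag would select the mode before tracing, roughly:
#   model.train() if opt.train else model.eval()
opt = make_parser().parse_args(['--train'])
print(opt.train)  # → True
```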
@junmcenroe Here is the weird part: I tried the exact same thing, thinking train mode (considering export.py was changed to remove --train) was what was causing the issue. However, when I patched the script and added train mode, I still got 1 output matrix as the result. I must have done something wrong with the patch! But thank you so much — now I at least know WHY it was failing to add the NMS code and decode layer.
The following is my modified export.py, for your reference.
@junmcenroe Hi, I have exported my trained best.pt to best.mlmodel successfully, but without NMS, and there are four outputs right now. Could you please help with how to add the NMS and detect layer, as you said previously?
@zhaoqier The relevant parts of the script (lines truncated as posted):

```python
builder.add_scale(name=f"normalize_coordinates_{outputName}", input_name=f"{outputName}_raw_coordinates",
builder.set_output(output_names=["raw_confidence", "raw_coordinates"], output_dims=[
pipeline = ct.models.pipeline.Pipeline(input_features=[("image", ct.models.datatypes.Array(3, 460, 460)),
```

Note: (3, 460, 460) should be (3, 640, 640), I think.

Option 3) Use the https://github.com/mshamash/yolov5 repo, which includes NMS.
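For reference, the NMS stage that gets appended to the CoreML pipeline is, at its core, greedy IoU suppression over the raw confidence/coordinate outputs. A minimal pure-Python sketch of that logic (the 0.45 threshold is illustrative, not the exporter's actual value):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.45):
    # Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

print(nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)], [0.9, 0.8, 0.7]))  # → [0, 2]
```

The built-in CoreML NMS layer does this per class and with configurable confidence/IoU thresholds; this sketch only shows the suppression idea.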
Hi @junmcenroe 👍 Anyway, thank you again sincerely!
Dear @zhaoqier Per my previous experiments, no result or an inaccurate position can be caused by the following reason.
Apple provides sample code for this. I started from that code, and it now works well, even though I confused the video output orientation at first. I hope this info is useful for you.
Dear @junmcenroe: Thanks for your reply — your suggestion was absolutely useful for me. At first I thought it was a problem with my .mlmodel, but finally I found it was caused by the coordinate transformation. I have solved it, and the performance in Live Capture is now great. Thanks for your help again. Best Regards, Kier
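For anyone hitting the same coordinate problem: Vision reports normalized bounding boxes with the origin at the bottom-left, while UIKit draws with the origin at the top-left, so the y axis has to be flipped (and the rect scaled to view size). A minimal sketch of the flip — written in Python for illustration; in an app this would be Swift, e.g. via `VNImageRectForNormalizedRect` plus a transform:

```python
def vision_to_uikit(rect):
    # rect is a normalized (x, y, w, h) with origin at the bottom-left (Vision).
    # Returns the same rect with origin at the top-left (UIKit convention).
    x, y, w, h = rect
    return (x, 1.0 - y - h, w, h)

print(vision_to_uikit((0.0, 0.25, 0.5, 0.5)))  # → (0.0, 0.25, 0.5, 0.5)
```

Note the flip is its own inverse, which is a handy sanity check when debugging misplaced boxes.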
Dear @zhaoqier Good news.
I adapted PR #7263 and I can make it work, but it's strange: does anyone know why the confidence shown in Xcode is always 100%? I can tune the [...] I checked, and my model does not output a conf of 1 for those images. Any help would be greatly appreciated!
Dear @philipperemy If the result comes from your own custom-trained model, I think a 100% conf result is likely to happen when the preview image is one of the images used in the training phase. If you try a preview image with a quite different background, you might get a different result. Or try the following repo; if you get a good result with it, your implementation might be wrong somewhere: `git clone https://github.com/mshamash/yolov5`
@glenn-jocher Thanks, but I'd like to avoid using HUB. It should be possible to get the same output with the export command of this repository. Or is there a quick way with HUB where I can upload this .pt checkpoint file and convert it to a .mlmodel?
@junmcenroe Even on the test set, I always get 100% for each prediction. When I run the branch
The last commit was on Sat Apr 9 12:51:04, so I guess this fork is too old compared with the latest commit of the main repo. I might have to check out an old commit on the main repo from around April.
I tried the following and got the same result as HUB.
In case you haven't sorted that error out, there's not much to it. You trained a custom model recently and are trying to use an export.py that is ~400 commits behind. To make it work, you need to go into models/yolo.py in that old branch and add the DetectionModel class from a models/yolo.py on master. (Maybe you can just check out the file from the master branch, but I'm not sure — I haven't tried.) The reason others are saying it works on yolov5s.pt is that that checkpoint is super old and doesn't rely on this class.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs. Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@philipperemy I know this is very old, but it is still an issue today. I'm wondering if there is an implementation issue with [...]. PSA: the screenshot @glenn-jocher posted with accurate confidence is a NeuralNetwork model, not an MLProgram with an NMS pipeline, which I have never seen work in ultralytics/ultralytics' exporter. Looking at yolov5, it uses a slightly different implementation (line 93 in 6420a1d).
I'm also trying to step through old PRs and compare implementations from other projects like Deci-AI/super-gradients#1333.
Hello everyone,
I have been working on this for days now to no avail, so I was wondering if anyone here could help me out. I first trained a model using the following dataset on Kaggle: https://www.kaggle.com/datasets/taranmarley/sptire
I used the Colab script from this GitHub repo to train my model. It works flawlessly; I can even see perfect detection of my tires with detect.py.
Now here is where the problem comes in: when I try to export the model to CoreML, everything goes well, except I get only 1 output out of the model (my guess is one matrix with the results):
After the model has been exported, the script runs a decode layer on the spec and adds NMS. Both fail with the following error:
![image](https://user-images.githubusercontent.com/1026038/193922873-22fe17c9-81ed-4b73-a63a-dec618506665.png)
My guess is that the model's output is incorrect, or that it has changed and the script I am using to export it is outdated and still expects 2 outputs while there is only 1. If anyone here could help me get my yolov5 model converted correctly to CoreML, I would be really grateful.
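Context on why the output count matters: in eval mode the Detect head fuses its raw maps into one decoded tensor, while in train mode it emits the raw per-scale maps, and the blog's decode layer then applies the YOLOv5 box transform itself. A sketch of what that decode computes for a single cell/anchor (grid, anchor, and stride values below are purely illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_cell(t, grid_xy, anchor_wh, stride):
    # YOLOv5-style decode of raw logits t = (tx, ty, tw, th) for one cell/anchor:
    # center is offset within the grid cell, size is a scaled anchor.
    tx, ty, tw, th = t
    gx, gy = grid_xy
    aw, ah = anchor_wh
    cx = (sigmoid(tx) * 2.0 - 0.5 + gx) * stride
    cy = (sigmoid(ty) * 2.0 - 0.5 + gy) * stride
    w = (sigmoid(tw) * 2.0) ** 2 * aw
    h = (sigmoid(th) * 2.0) ** 2 * ah
    return cx, cy, w, h

# Zero logits land the center in the middle of the cell and return the anchor size:
print(decode_cell((0.0, 0.0, 0.0, 0.0), (3, 4), (10.0, 13.0), 8))  # → (28.0, 36.0, 10.0, 13.0)
```

If the exported model already contains this decode (the single-output case), running the blog's decode script on top of it is exactly what fails.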
Here is the export script that I am running:
https://colab.research.google.com/drive/1uR738UTlzI7apqeN0qr6mQ5ke_a5SKa8?usp=sharing
Here is the blog that I followed to run this script:
https://rockyshikoku.medium.com/convert-yolov5-to-coreml-also-add-a-decode-layer-113408b7a848
P.S. Please let me know if additional info is needed.