I'm having trouble with object recognition on iPhone X (and also on iPhone 6s, which I tested) using YOLOv3-CoreML. Running the code without any changes gives weird results, as you can see in the picture below:

Using the predict() function, the iPhone X appears to recognize objects such as the umbrella with 100% confidence. Changing confidenceThreshold and iouThreshold has no effect, nor does setting maxBoundingBoxes to values between 1 and 5. Using predictUsingVision produces no predictions on the screen at all.
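For reference, here is roughly what I changed while experimenting (confidenceThreshold, iouThreshold, maxBoundingBoxes, and predict() come from this repo's YOLO.swift; the Prediction field names below are from memory, so treat this as a sketch rather than an exact diff):

```swift
// Values I experimented with in YOLO.swift (names as they appear in this repo);
// none of them changed the output on the iPhone X:
let confidenceThreshold: Float = 0.6   // also tried higher and lower values
let iouThreshold: Float = 0.5
let maxBoundingBoxes = 5               // also tried 1...5

// Where the boxes get drawn, I also logged the raw output of predict()
// to confirm the ~100% scores come from the model rather than from the
// drawing code (the Prediction field names here are approximate):
for prediction in predictions {
    print("class \(prediction.classIndex)  score \(prediction.score)  rect \(prediction.rect)")
}
```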
I get the same problem on iPhone X with tiny-YOLOv3 and YOLOv2, using both my custom models and the original Darknet models.
However, the same code on an iPhone 6 (not 6s) produces the opposite result: it predicts objects successfully (using the predict() function) with both my custom model and the original Darknet models:

I assumed the problem might be related to GPU changes Apple introduced starting with the iPhone 6s, but I haven't found any information about that. Has anybody run into an issue like this? Has anybody tried running this code on an iPhone X? I deliberately tested the repository code without any changes to provide a demonstration - the problem isn't specific to YOLOv3, tiny-YOLOv3, or even YOLOv2, as the result is the same for me with every YOLO version.
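If the GPU hypothesis is right, one way to test it would be to force Core ML to run on the CPU only and compare the output. A rough sketch of what I have in mind (assumes iOS 12+; "YOLOv3.mlmodelc" is a placeholder for whatever compiled model this project actually bundles):

```swift
import CoreML

// Load the compiled model with CPU-only inference to rule out the
// GPU (Float16) path. MLModelConfiguration is standard Core ML API on iOS 12+.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly   // compare against .cpuAndGPU and .all

// "YOLOv3" is a placeholder name for the compiled model in the app bundle.
guard let modelURL = Bundle.main.url(forResource: "YOLOv3", withExtension: "mlmodelc") else {
    fatalError("Compiled model not found in the app bundle")
}

do {
    let cpuModel = try MLModel(contentsOf: modelURL, configuration: config)
    // ...wire cpuModel into the existing prediction pipeline and compare the
    // boxes/confidences with the default (GPU) run on the iPhone X.
    print("Loaded model for CPU-only inference: \(cpuModel)")
} catch {
    print("Failed to load model with CPU-only configuration: \(error)")
}
```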
You can find videos demonstrating the behavior on iPhone 6 and iPhone X:
iPhone X: https://imgur.com/a/EZOpr1W
iPhone 6: https://imgur.com/a/qQZtJAd
I would appreciate any help or suggestions you can provide. I've spent two weeks trying to solve this without success, so I'm wondering whether anyone else has gotten this code working on an iPhone X.