Add torch ops for d2go models #1509
base: main
Conversation
Thanks for the PR. These changes look good; I just have a couple of questions.
Before we can merge it, we need unit tests for the bug fixes and the new ops. @dncnbuck - can you add that?
It also looks like there is a merge conflict.
context.add(out, torch_name=node.name)

@register_torch_op(torch_alias=["__and_", "__and__"])
def logicaland(context, node):
logical_and was already implemented. Do we need this function?
@fukatani Awesome! Thanks - I've removed the op, used the logical_and version, and added the torch alias there.
A note on the alias __and_: within the model, the RPN requires the op

item = torch.__and__(torch.gt(widths0, 0.), torch.gt(heights0, 0.))

However, convert_nodes assumes that ops ending in _ are in-place ops and removes the trailing _ before the op lookup, so __and__ would be looked up as __and_ (hence the extra alias):
op_lookup = node.kind
if op_lookup.endswith("_"):
    # This is an "in place" op.
    # Look up the standard op instead by removing underscore.
    op_lookup = op_lookup[:-1]
add_op = _TORCH_OPS_REGISTRY.get(op_lookup, None)
Maybe I should just patch that issue instead and leave only the alias __and__. I think it's probably best to leave the patch out of this PR for now and open another PR specifically to fix convert_nodes, with something like:
add_op = _TORCH_OPS_REGISTRY.get(op_lookup, None)
if add_op is None and op_lookup.endswith("_"):
    # This is an "in place" op.
    # Look up the standard op instead by removing underscore.
    op_lookup = op_lookup[:-1]
    add_op = _TORCH_OPS_REGISTRY.get(op_lookup, None)
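With that ordering, an exact registry match (such as a registered __and__ alias) wins, and the trailing underscore is only stripped as a fallback for genuinely in-place ops.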
Making this change as a separate PR sounds good.
Could someone provide a toy network with a __and__ layer?
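For reference, a minimal sketch of such a toy network, mirroring the RPN pattern quoted above (the module name and input shapes are illustrative):

import torch

class AndNet(torch.nn.Module):
    # Produces an aten::__and__ node when traced, via the same pattern
    # the RPN uses: elementwise gt followed by torch.__and__.
    def forward(self, widths, heights):
        return torch.__and__(torch.gt(widths, 0.), torch.gt(heights, 0.))

traced = torch.jit.trace(AndNet(), (torch.rand(4), torch.rand(4)))
print(traced.graph)  # the traced graph should contain the __and__ node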
@TobyRoseman "We need unit tests for the bug fixes and the new ops. @dncnbuck - can you add that?" Yeah good point. I'll be able to add these but this might not happen today sadly. I'll try to get to it this week 😄 |
No worries about the timing. Let me know if you have any questions when adding the unit tests.
I think you should update to the latest code; I don't think this can be merged at the moment without that.
@TobyRoseman sorry, this dropped off my radar these last months. I've updated the PR to include some tests for the ops, and removed the repeat_interleave op implementation for the time being since it was not implemented well enough; I'll hold off on adding that until I can give it some time.
@@ -3398,6 +3405,18 @@ def _slice(context, node):
    context.add(res)

def _num_splits_and_sizes(split_sizes):
Should this just be an inner method of the split method?
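For illustration, a self-contained sketch of the suggested nesting; the helper body here is invented for the example and is not the PR's actual implementation:

# Hypothetical standalone illustration of moving the helper inside split;
# split_sizes is treated as a plain list of chunk lengths.
def split(split_sizes, values):
    def _num_splits_and_sizes(split_sizes):
        return len(split_sizes), split_sizes

    num_splits, sizes = _num_splits_and_sizes(split_sizes)
    chunks, start = [], 0
    for size in sizes:
        chunks.append(values[start:start + size])
        start += size
    return chunks

print(split([2, 3], list(range(5))))  # [[0, 1], [2, 3, 4]]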
@register_torch_op
def scatter_add(context, node):
    inputs = _get_inputs(context, node)
    _scatter(context, inputs, 'add', node.name)

@register_torch_op
def roi_align(context, node):
Are there unit tests for this method?
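If it helps, a rough sketch of what such a test could trace and compare, assuming torchvision.ops.roi_align as the source op (the module name and shapes are illustrative):

import torch
import torchvision

class RoiAlignNet(torch.nn.Module):
    def forward(self, features, boxes):
        # Prepend a zero batch index so the (N, 4) boxes become
        # (N, 5) rois for a single-image batch.
        rois = torch.cat([torch.zeros(boxes.shape[0], 1), boxes], dim=1)
        return torchvision.ops.roi_align(features, rois, output_size=(2, 2))

features = torch.rand(1, 3, 16, 16)
boxes = torch.tensor([[0., 0., 8., 8.], [4., 4., 12., 12.]])
traced = torch.jit.trace(RoiAlignNet(), (features, boxes))
# A conversion test would compare the converted Core ML model's
# output against traced(features, boxes).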
@dncnbuck - no worries about this falling off your radar. Thanks for getting back to it. Please rebase this pull request on top of main.
Has this PR been included? i.e. is it possible to convert d2go models to coreml?
+1, would love to know the status of this PR; adding these ops would be a huge benefit to me.
So has anyone successfully converted a detectron2 / d2go model to coreml? I have been struggling with this, as using CocoaPods is not ideal for me.
Added torch ops, plus ops with minor patches. These were added in order to make progress with the conversion of a PyTorch Mask R-CNN model from d2go / detectron2.
For more details see: #1505 (comment)