Conversion from ONNX model fails with Relu #132

Open
victorromeo opened this issue Nov 30, 2020 · 3 comments

@victorromeo

Simple models with Relu fail when converting via the ONNX frontend, because the Relu output tensor's shape and dtype information is missing (None).

Could you confirm whether ONNX is intended to be supported?

frontend/onnx.py

... # [line 277]
  @staticmethod
  def _handle_relu(op_info):
    # The Relu output should share the input's dtype and shape, but the
    # input tensor's shape is None here, so the slice below fails.
    input_tensor = op_info.input_tensors[0]
    op_info.output_tensors[0].dtype = input_tensor.dtype
    op_info.output_tensors[0].shape = input_tensor.shape[:]
...
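
For what it's worth, a minimal sketch (a hand-built graph, not the #133 model) of why the frontend may see None here: ONNX only stores shape/dtype for graph inputs, outputs, and initializers, and intermediate tensors only get a value_info entry once shape inference has been run.

import onnx
from onnx import helper, TensorProto, shape_inference

# Tiny graph: x -> Relu -> relu_out -> Identity -> y.
# 'relu_out' is an intermediate tensor, so it gets no ValueInfo by default.
inp = helper.make_tensor_value_info('x', TensorProto.FLOAT, [1, 4])
out = helper.make_tensor_value_info('y', TensorProto.FLOAT, [1, 4])
relu = helper.make_node('Relu', inputs=['x'], outputs=['relu_out'])
ident = helper.make_node('Identity', inputs=['relu_out'], outputs=['y'])
model = helper.make_model(helper.make_graph([relu, ident], 'relu_demo', [inp], [out]))

print([vi.name for vi in model.graph.value_info])  # [] - no shape/dtype for relu_out

# After ONNX shape inference, the intermediate tensor is annotated.
inferred = shape_inference.infer_shapes(model)
print([vi.name for vi in inferred.graph.value_info])  # ['relu_out']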
@dboyliao
Member

Yes, it is.
And it is our plan to support PyTorch as well.
We haven't used ONNX for a while, but I'm pretty sure it was working.
Maybe there is something I missed because of updates in PyTorch or ONNX.

Can you attach your model file here so I can take a look at what the bug is?

@victorromeo
Author

I've added an example in #133, using keras2onnx to convert a basic MNIST Keras model. In that example, tests/test_keras_onnx_model fails in the _handle_relu static method when assigning the shape.

I believe the _build_graph operation in the ONNX frontend is ultimately responsible for not transferring the shape and dtype information correctly. May I get your thoughts?

My goal is to get transpose + average pool operations supported this week, if possible, on my own dev branches, and our preference was to avoid TFLite Micro.

Thanks.
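
For reference, a rough repro sketch of the conversion step (using a hypothetical small Dense/ReLU model as a stand-in for the actual model attached in #133):

import onnx
import keras2onnx
import tensorflow as tf

# Hypothetical stand-in for the basic MNIST Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save(onnx_model, 'mnist_relu.onnx')

# Intermediate tensors (e.g. the Relu output) only carry shape/dtype if the
# exporter emitted value_info entries for them; if this prints an empty list,
# _handle_relu has nothing to copy from.
print([vi.name for vi in onnx_model.graph.value_info])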

@victorromeo
Author

I believe the operation _build_intermediate_ops needs shape and dtype information, but they are hard-coded to None.

def _build_intermediate_ops(self, onnx_graph, ugraph, op_types_cnt, tensor_names_map):
    """Build all intermediate nodes, i.e. the nodes that are in neither the initializer list nor the input list
    """
    # create all output tensors
    for node in onnx_graph.node:
      cnt = op_types_cnt[node.op_type]
      node_name = self._format_node_name(node.name, node.op_type, cnt)
      op_types_cnt[node.op_type] += 1
      for i, name in enumerate(node.output):
        tensor_names_map[name] = TensorInfo(
          name=self._format_tensor_name(name, node_name, i),
          op_name=node_name,
          dtype=None, # <== Should be set/calculated?
          shape=None, # <== Should be set/calculated?
          ugraph=ugraph
        )
    # create ops
    for node in onnx_graph.node:
      input_tensors = [
        tensor_names_map[name] for name in node.input
      ]
      output_tensors = [
        tensor_names_map[name] for name in node.output
      ]
      op_attr = {
        attrib_pb.name: _convert_op_attribute(attrib_pb)
        for attrib_pb in node.attribute
      }
      node_name = output_tensors[0].op_name
      OperationInfo(
        name=node_name,
        input_tensors=input_tensors,
        output_tensors=output_tensors,
        op_type=node.op_type,
        lib_name='onnx',
        ugraph=ugraph,
        op_attr=op_attr
      )
