RV1106: a simple classification net works fine through conversion, but when I run the RKNN model on the RV1106 its outputs differ from the PC (the ONNX-to-RKNN conversion and the on-board run report no errors, yet the results are simply wrong)
#333 · Open · Calsia opened this issue on Aug 31, 2024 · 0 comments
https://github.com/HuKai97/YOLOv5-LPRNet-Licence-Recognition/blob/master/models/LPRNet.py
This link is the net I used. When I convert it to RKNN and run accuracy_analysis, everything seems to work fine, but when I run it on the RV1106 the results are different. What's wrong with the net?
This is exactly the model I used. Converting the ONNX model to RKNN on the PC, testing it, and running the accuracy analysis all work without problems, but on the board inference reports no errors and the classification output is completely random.
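For reference, the PC-side flow is roughly the following. This is only a minimal sketch: the ONNX path, mean/std normalization values, quantization dataset, and test image are placeholders, not the exact values used here.

# Minimal sketch of the PC-side conversion and analysis flow (rknn-toolkit2 2.1.0).
# The ONNX path, mean/std values, dataset list and test image are placeholders.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# LPRNet input is 1x3x48x168; the normalization values here are an assumption.
rknn.config(mean_values=[[127.5, 127.5, 127.5]],
            std_values=[[128.0, 128.0, 128.0]],
            target_platform='rv1106')

rknn.load_onnx(model='lprnet.onnx')                        # placeholder path
rknn.build(do_quantization=True, dataset='./dataset.txt')  # list of calibration images
rknn.export_rknn('../model_pldr/lpr.rknn')

# Simulator run plus the per-layer accuracy analysis that produced the table below
rknn.init_runtime()                                        # no target -> simulator
rknn.accuracy_analysis(inputs=['./plate.jpg'], output_dir='./snapshot')
outputs = rknn.inference(inputs=['./plate.jpg'])
print(outputs[0].shape)                                    # expect (1, 21, 78)
rknn.release()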
Below is the log from converting the model on the PC:
I rknn-toolkit2 version: 2.1.0+708089d1
--> Config model
done
--> Loading model
I Loading : 100%|███████████████████████████████████████████████| 22/22 [00:00<00:00, 103796.05it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops results:
D convert_squeeze_to_reshape: remove node = ['Squeeze_25'], add node = ['Squeeze_25_2rs']
D unsqueeze_to_4d_transpose: remove node = [], add node = ['onnx::Transpose_108_rs', 'output-rs']
D bypass_two_reshape: remove node = ['onnx::Transpose_108_rs', 'Squeeze_25_2rs']
D fold_constant ...
D fold_constant done.
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I GraphPreparing : 100%|███████████████████████████████████████████| 27/27 [00:00<00:00, 562.96it/s]
I Quantizating : 100%|██████████████████████████████████████████████| 27/27 [00:00<00:00, 45.57it/s]
D
D quant_optimizer ...
D quant_optimizer results:
D adjust_relu: ['Relu_22', 'Relu_20', 'Relu_17', 'Relu_15', 'Relu_12', 'Relu_10', 'Relu_7', 'Relu_5', 'Relu_3', 'Relu_1']
D quant_optimizer done.
D
W build: The default input dtype of 'input' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [20:28:03.666] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0, pipeline_fuse = 1, enable_flash_attention = 0
I RKNN: librknnc version: 2.1.0 (967d001cc8@2024-08-07T11:32:45)
D RKNN: [20:28:03.667] RKNN is invoked
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [20:28:03.673] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [20:28:03.673] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [20:28:03.673] >>>>>> start: OpEmit
D RKNN: [20:28:03.674] <<<<<<<< end: OpEmit
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNLayoutMatchPass
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNAllocateConvCachePass
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNAllocateConvCachePass
D RKNN: [20:28:03.674] >>>>>> start: OpEmit
D RKNN: [20:28:03.674] finish initComputeZoneMapByStepsVector
D RKNN: [20:28:03.674] finish initComputeZoneMapByStepsVector
D RKNN: [20:28:03.674] finish initComputeZoneMap
D RKNN: [20:28:03.674] <<<<<<<< end: OpEmit
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [20:28:03.674] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [20:28:03.674] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [20:28:03.676] Warning: Tensor output-rs_i1 need paramter qtype, type is set to float16 by default!
W RKNN: [20:28:03.676] Warning: Tensor output-rs_i1 need paramter qtype, type is set to float16 by default!
D RKNN: [20:28:03.676] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [20:28:03.676] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [20:28:03.676] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [20:28:03.676] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [20:28:03.676] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [20:28:03.676] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [20:28:03.676] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [20:28:03.676] Network Layer Information Table
D RKNN: [20:28:03.676] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [20:28:03.676] ID OpType DataType Target InputShape OutputShape Cycles(DDR/NPU/Total) RW(KB) FullName
D RKNN: [20:28:03.676] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [20:28:03.676] 0 InputOperator INT8 CPU \ (1,3,48,168) 0/0/0 0 InputOperator:input
D RKNN: [20:28:03.676] 1 ConvRelu INT8 NPU (1,3,48,168),(8,3,5,5),(8) (1,8,44,164) 22823/180400/180400 24 Conv:Conv_0
D RKNN: [20:28:03.676] 2 ConvRelu INT8 NPU (1,8,44,164),(8,8,3,3),(8) (1,8,44,164) 37697/64944/64944 114 Conv:Conv_2
D RKNN: [20:28:03.676] 3 ConvRelu INT8 NPU (1,8,44,164),(16,8,3,3),(16) (1,16,44,164) 37884/64944/64944 115 Conv:Conv_4
D RKNN: [20:28:03.676] 4 ConvRelu INT8 NPU (1,16,44,164),(16,16,3,3),(16) (1,16,44,164) 37884/64944/64944 115 Conv:Conv_6
D RKNN: [20:28:03.676] 5 MaxPool INT8 NPU (1,16,44,164) (1,16,22,82) 0/0/0 112 MaxPool:MaxPool_8
D RKNN: [20:28:03.676] 6 ConvRelu INT8 NPU (1,16,22,82),(32,16,3,3),(32) (1,32,22,82) 14848/32544/32544 32 Conv:Conv_9
D RKNN: [20:28:03.676] 7 ConvRelu INT8 NPU (1,32,22,82),(32,32,3,3),(32) (1,32,22,82) 20283/32544/32544 65 Conv:Conv_11
D RKNN: [20:28:03.676] 8 MaxPool INT8 NPU (1,32,22,82) (1,32,11,41) 0/0/0 56 MaxPool:MaxPool_13
D RKNN: [20:28:03.676] 9 ConvRelu INT8 NPU (1,32,11,41),(48,32,3,3),(48) (1,48,11,41) 8165/12528/12528 27 Conv:Conv_14
D RKNN: [20:28:03.676] 10 ConvRelu INT8 NPU (1,48,11,41),(48,48,3,3),(48) (1,48,11,41) 10458/25056/25056 41 Conv:Conv_16
D RKNN: [20:28:03.676] 11 MaxPool INT8 NPU (1,48,11,41) (1,48,5,20) 0/0/0 21 MaxPool:MaxPool_18
D RKNN: [20:28:03.676] 12 ConvRelu INT8 NPU (1,48,5,20),(64,48,3,3),(64) (1,64,5,20) 6391/8064/8064 32 Conv:Conv_19
D RKNN: [20:28:03.676] 13 ConvRelu INT8 NPU (1,64,5,20),(128,64,3,3),(128) (1,128,5,20) 15254/16128/16128 79 Conv:Conv_21
D RKNN: [20:28:03.676] 14 MaxPool INT8 NPU (1,128,5,20) (1,128,1,21) 0/0/0 12 MaxPool:MaxPool_23
D RKNN: [20:28:03.676] 15 Conv INT8 NPU (1,128,1,21),(78,128,1,1),(78) (1,78,1,21) 2434/640/2434 13 Conv:Conv_24
D RKNN: [20:28:03.676] 16 Transpose INT8 NPU (1,78,1,21) (1,21,1,78) 0/0/0 1 Transpose:Transpose_26
D RKNN: [20:28:03.676] 17 Reshape INT8 NPU (1,21,1,78),(3) (1,21,78) 0/0/0 2 Reshape:output-rs
D RKNN: [20:28:03.676] 18 OutputOperator INT8 CPU (1,21,78) \ 0/0/0 1 OutputOperator:output
D RKNN: [20:28:03.676] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [20:28:03.676] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [20:28:03.676] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [20:28:03.677] Export Mini RKNN model to /tmp/tmpqwtc_u17/check.rknn
D RKNN: [20:28:03.677] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [20:28:03.677] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [20:28:03.677] ----------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [20:28:03.677] Feature Tensor Information Table
D RKNN: [20:28:03.677] ------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] ID User Tensor DataType DataFormat OrigShape NativeShape | [Start End) Size
D RKNN: [20:28:03.677] ------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] 1 ConvRelu input INT8 NC1HWC2 (1,3,48,168) (1,1,48,168,3) | 0x0002e040 0x00034340 0x00006300
D RKNN: [20:28:03.677] 2 ConvRelu onnx::Conv_75 INT8 NC1HWC2 (1,8,44,164) (1,1,44,164,16) | 0x00034340 0x00050640 0x0001c300
D RKNN: [20:28:03.677] 3 ConvRelu onnx::Conv_78 INT8 NC1HWC2 (1,8,44,164) (1,1,44,164,16) | 0x00050640 0x0006c940 0x0001c300
D RKNN: [20:28:03.677] 4 ConvRelu onnx::Conv_81 INT8 NC1HWC2 (1,16,44,164) (1,1,44,164,16) | 0x0002e040 0x0004a340 0x0001c300
D RKNN: [20:28:03.677] 5 MaxPool onnx::MaxPool_84 INT8 NC1HWC2 (1,16,44,164) (1,1,44,164,16) | 0x0004a340 0x00066640 0x0001c300
D RKNN: [20:28:03.677] 6 ConvRelu input.32 INT8 NC1HWC2 (1,16,22,82) (1,1,22,82,16) | 0x0002e040 0x00035100 0x000070c0
D RKNN: [20:28:03.677] 7 ConvRelu onnx::Conv_88 INT8 NC1HWC2 (1,32,22,82) (1,2,22,82,16) | 0x00035100 0x00043280 0x0000e180
D RKNN: [20:28:03.677] 8 MaxPool onnx::MaxPool_91 INT8 NC1HWC2 (1,32,22,82) (1,2,22,82,16) | 0x00043280 0x00051400 0x0000e180
D RKNN: [20:28:03.677] 9 ConvRelu input.52 INT8 NC1HWC2 (1,32,11,41) (1,2,11,41,16) | 0x0002e040 0x000318c0 0x00003880
D RKNN: [20:28:03.677] 10 ConvRelu onnx::Conv_95 INT8 NC1HWC2 (1,48,11,41) (1,3,11,41,16) | 0x000318c0 0x00036d80 0x000054c0
D RKNN: [20:28:03.677] 11 MaxPool onnx::MaxPool_98 INT8 NC1HWC2 (1,48,11,41) (1,3,11,41,16) | 0x00036d80 0x0003c240 0x000054c0
D RKNN: [20:28:03.677] 12 ConvRelu input.72 INT8 NC1HWC2 (1,48,5,20) (1,3,5,20,16) | 0x0002e040 0x0002f300 0x000012c0
D RKNN: [20:28:03.677] 13 ConvRelu onnx::Conv_102 INT8 NC1HWC2 (1,64,5,20) (1,4,5,20,16) | 0x0002f300 0x00030c00 0x00001900
D RKNN: [20:28:03.677] 14 MaxPool onnx::MaxPool_105 INT8 NC1HWC2 (1,128,5,20) (1,8,5,20,16) | 0x00030c00 0x00033e00 0x00003200
D RKNN: [20:28:03.677] 15 Conv input.92 INT8 NC1HWC2 (1,128,1,21) (1,9,1,21,16) | 0x0002e040 0x0002ec40 0x00000c00
D RKNN: [20:28:03.677] 16 Transpose onnx::Squeeze_107 INT8 NC1HWC2 (1,78,1,21) (1,5,1,21,16) | 0x0002ec40 0x0002f3c0 0x00000780
D RKNN: [20:28:03.677] 16 Transpose onnx::Squeeze_107_exSecondary INT8 NC1HWC2 (1,78,1,21) (1,15,1,21,16) | 0x0002f3c0 0x000307c0 0x00001400
D RKNN: [20:28:03.677] 17 Reshape output-rs INT8 NC1HWC2 (1,21,1,78) (1,2,1,78,16) | 0x0002e040 0x0002ea40 0x00000a00
D RKNN: [20:28:03.677] 17 Reshape output-rs_exSecondary INT8 NC1HWC2 (1,21,1,78) (1,5,1,78,16) | 0x0002ea40 0x000304b0 0x00001a70
D RKNN: [20:28:03.677] 18 OutputOperator output INT8 UNDEFINED (1,21,78) (1,21,78) | 0x000304c0 0x00030ec0 0x00000a00
D RKNN: [20:28:03.677] ------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] -------------------------------------------------------------------------------------
D RKNN: [20:28:03.677] Const Tensor Information Table
D RKNN: [20:28:03.677] ---------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] ID User Tensor DataType OrigShape | [Start End) Size
D RKNN: [20:28:03.677] ---------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] 1 ConvRelu onnx::Conv_111 INT8 (8,3,5,5) | 0x00002980 0x00002cc0 0x00000340
D RKNN: [20:28:03.677] 1 ConvRelu onnx::Conv_112 INT32 (8) | 0x00002cc0 0x00002d40 0x00000080
D RKNN: [20:28:03.677] 2 ConvRelu onnx::Conv_114 INT8 (8,8,3,3) | 0x00002d40 0x000031c0 0x00000480
D RKNN: [20:28:03.677] 2 ConvRelu onnx::Conv_115 INT32 (8) | 0x000031c0 0x00003240 0x00000080
D RKNN: [20:28:03.677] 3 ConvRelu onnx::Conv_117 INT8 (16,8,3,3) | 0x00003240 0x00003b40 0x00000900
D RKNN: [20:28:03.677] 3 ConvRelu onnx::Conv_118 INT32 (16) | 0x00003b40 0x00003bc0 0x00000080
D RKNN: [20:28:03.677] 4 ConvRelu onnx::Conv_120 INT8 (16,16,3,3) | 0x00003bc0 0x000044c0 0x00000900
D RKNN: [20:28:03.677] 4 ConvRelu onnx::Conv_121 INT32 (16) | 0x000044c0 0x00004540 0x00000080
D RKNN: [20:28:03.677] 6 ConvRelu onnx::Conv_123 INT8 (32,16,3,3) | 0x00004540 0x00005740 0x00001200
D RKNN: [20:28:03.677] 6 ConvRelu onnx::Conv_124 INT32 (32) | 0x00005740 0x00005840 0x00000100
D RKNN: [20:28:03.677] 7 ConvRelu onnx::Conv_126 INT8 (32,32,3,3) | 0x00005840 0x00007c40 0x00002400
D RKNN: [20:28:03.677] 7 ConvRelu onnx::Conv_127 INT32 (32) | 0x00007c40 0x00007d40 0x00000100
D RKNN: [20:28:03.677] 9 ConvRelu onnx::Conv_129 INT8 (48,32,3,3) | 0x00007d40 0x0000b340 0x00003600
D RKNN: [20:28:03.677] 9 ConvRelu onnx::Conv_130 INT32 (48) | 0x0000b340 0x0000b4c0 0x00000180
D RKNN: [20:28:03.677] 10 ConvRelu onnx::Conv_132 INT8 (48,48,3,3) | 0x0000b4c0 0x000105c0 0x00005100
D RKNN: [20:28:03.677] 10 ConvRelu onnx::Conv_133 INT32 (48) | 0x000105c0 0x00010740 0x00000180
D RKNN: [20:28:03.677] 12 ConvRelu onnx::Conv_135 INT8 (64,48,3,3) | 0x00010740 0x00017340 0x00006c00
D RKNN: [20:28:03.677] 12 ConvRelu onnx::Conv_136 INT32 (64) | 0x00017340 0x00017540 0x00000200
D RKNN: [20:28:03.677] 13 ConvRelu onnx::Conv_138 INT8 (128,64,3,3) | 0x00017540 0x00029540 0x00012000
D RKNN: [20:28:03.677] 13 ConvRelu onnx::Conv_139 INT32 (128) | 0x00029540 0x00029940 0x00000400
D RKNN: [20:28:03.677] 15 Conv newCnn.weight INT8 (78,128,1,1) | 0x00000000 0x00002700 0x00002700
D RKNN: [20:28:03.677] 15 Conv newCnn.bias INT32 (78) | 0x00002700 0x00002980 0x00000280
D RKNN: [20:28:03.677] 17 Reshape output-rs_i1 INT64 (3) | 0x00029940*0x00029980 0x00000040
D RKNN: [20:28:03.677] ---------------------------------------------------+---------------------------------
D RKNN: [20:28:03.677] ----------------------------------------
D RKNN: [20:28:03.677] Total Internal Memory Size: 250.25KB
D RKNN: [20:28:03.677] Total Weight Memory Size: 166.375KB
D RKNN: [20:28:03.677] ----------------------------------------
D RKNN: [20:28:03.677] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
--> Export rknn model, ../model_pldr/lpr.rknn
done
--> Init runtime environment
I Target is None, use simulator!
done
--> Running model
I GraphPreparing : 100%|██████████████████████████████████████████| 29/29 [00:00<00:00, 2977.60it/s]
I SessionPreparing : 100%|████████████████████████████████████████| 29/29 [00:00<00:00, 2697.66it/s]
I GraphPreparing : 100%|██████████████████████████████████████████| 29/29 [00:00<00:00, 3108.64it/s]
I AccuracyAnalysing : 100%|█████████████████████████████████████████| 29/29 [00:00<00:00, 97.96it/s]
simulator_error: calculate the output error of each layer of the simulator (compared to the 'golden' value).
entire: output error of each layer between 'golden' and 'simulator', these errors will accumulate layer by layer.
single: single-layer output error between 'golden' and 'simulator', can better reflect the single-layer accuracy of the simulator.
layer_name simulator_error
entire single
cos euc cos euc
[Input] input 1.00000 | 0.0 1.00000 | 0.0
[exDataConvert] input_int8 1.00000 | 4.3257 1.00000 | 4.3257
[Conv] input.4
[Relu] onnx::Conv_75 0.99996 | 1.2561 0.99996 | 1.2561
[Conv] input.12
[Relu] onnx::Conv_78 0.99975 | 2.6626 0.99989 | 1.7904
[Conv] input.20
[Relu] onnx::Conv_81 0.99922 | 5.5223 0.99989 | 2.0373
[Conv] input.28
[Relu] onnx::MaxPool_84 0.99944 | 7.5979 0.99997 | 1.6664
[MaxPool] input.32 0.99963 | 3.9089 0.99999 | 0.5475
[Conv] input.40
[Relu] onnx::Conv_88 0.99883 | 4.9569 0.99992 | 1.2693
[Conv] input.48
[Relu] onnx::MaxPool_91 0.99928 | 4.6993 0.99996 | 1.1559
[MaxPool] input.52 0.99948 | 2.8090 0.99999 | 0.4502
[Conv] input.60
[Relu] onnx::Conv_95 0.99930 | 2.4040 0.99995 | 0.6363
[Conv] input.68
[Relu] onnx::MaxPool_98 0.99858 | 1.8201 0.99990 | 0.4723
[MaxPool] input.72 0.99909 | 1.3663 0.99997 | 0.2254
[Conv] input.80
[Relu] onnx::Conv_102 0.99876 | 1.3209 0.99992 | 0.3279
[Conv] input.88
[Relu] onnx::MaxPool_105 0.99662 | 5.5412 0.99984 | 1.1720
[MaxPool] input.92 0.99825 | 4.3240 0.99996 | 0.5895
[Conv] onnx::Squeeze_107 0.99947 | 21.785 0.99996 | 5.1620
[Transpose] output-rs 0.99947 | 21.785 0.99997 | 4.4925
[Reshape] output_int8 0.99947 | 21.785 0.99997 | 4.4925
[exDataConvert] output 0.99947 | 21.785 0.99997 | 4.4925
I The error analysis results save to: ./snapshot/error_analysis.txt
W accuracy_analysis: The mapping of layer_name & file_name save to: ./snapshot/map_name_to_file.txt
[ 5.7592 12.67024 5.7592 3.0715735 6.527094 7.6789336
16.893654 9.214721 1.5357867 1.9197334 6.527094 -3.0715735
1.5357867 10.366561 -1.5357867 1.9197334 11.902348 7.6789336
0.3839467 9.598667 13.438134 ]
(1, 21, 78) [ 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 0 0 0 0 0
0 0 20 0 0 0 0 0 15 15 13 14 18 1 20 0 18 18 18 16 18 15 17 8
7 19 18 20 3 18 3 17 18 18 8 0 20 16 6 18 3 18 20 3 3 18 18 16
18 18 18 17 20 9]
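One detail worth double checking, given the two "W build" warnings above: the RKNN model's default input and output dtypes were changed to int8, so on the board the raw output buffer is quantized and has to be dequantized (or requested as float from the runtime) before taking the argmax, whereas the PC-side toolkit handles this automatically. Below is a minimal sketch of that step, assuming the raw output has been dumped to a file; the scale and zero-point values are placeholders, and the real ones come from the output tensor attributes (e.g. via rknn_query in the C runtime).

# Sketch of dequantizing an int8 output buffer before decoding.
# scale / zero_point are placeholders; query the real values from the runtime.
import numpy as np

scale, zero_point = 0.3839467, 0                      # hypothetical quantization params
raw = np.fromfile('output_int8.bin', dtype=np.int8)   # hypothetical raw dump from the board
logits = (raw.astype(np.float32) - zero_point) * scale
logits = logits.reshape(1, 21, 78)                    # (batch, time steps, classes), as in the log

pred = logits.argmax(axis=-1)[0]                      # greedy decode: best class per time step
print(pred)

The same applies on the input side: the buffer handed to the runtime has to match the int8 input format the converted model now expects, otherwise the predictions can come out effectively random.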