Adding models from Arm internal zoo
Adding efficientnet_lite0_224,
yolov3_416_416_backbone_mltools_int8 and
yolov3_tiny_int8_pruned_backbone_only
from Arm internal zoo to public zoo models/experimental.
johan-alfven-arm committed Jul 19, 2022
1 parent b898b17 commit 8781b67
Showing 6 changed files with 119 additions and 0 deletions.
37 changes: 37 additions & 0 deletions models/experimental/efficientnet_lite0_224/README.md
@@ -0,0 +1,37 @@
# image_classification/efficientnet_lite0_224/tflite_int8

## Description
This work is developed from the codebase located [here](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/lite/README.md) and is under an Apache 2 license available [here](https://github.com/tensorflow/tpu/blob/master/LICENSE).

The original networks, which we have optimized via tooling but left otherwise unchanged, are copyright of the TensorFlow authors, as stated in the license file linked above.

## License
[Apache-2.0](https://spdx.org/licenses/Apache-2.0.html)

## Network Information
| Network Information | Value |
|---------------------|-------|
| Framework | TensorFlow Lite |
| SHA-1 Hash | 35f9dafaf25f8abf2225265b0724979a68bf6d67 |
| Size (Bytes) | 5422760 |
| Provenance | https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/lite/efficientnet-lite0.tar.gz |
| Paper | https://arxiv.org/pdf/1905.11946.pdf |


## Accuracy
Dataset: ILSVRC 2012

| Metric | Value |
|--------|-------|
| top_1_accuracy | 0.744 |

## Network Inputs
| Input Node Name | Shape | Example Path | Example Type | Example Use Case |
|-----------------|-------|--------------|------------------|--------------|
| images | (1, 224, 224, 3) | models/image_classification/efficientnet_lite0_224/tflite_int8/testing_input | | A typical single-batch ImageNet-style image (a cat) resized to 224x224. |

## Network Outputs
| Output Node Name | Shape | Description |
|------------------|-------|-------------|
| Softmax | (1, 1000) | Quantized (uint8) probability distribution over the 1000 ImageNet classes. |
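
The tables above are enough to run the model end to end. Below is a minimal inference sketch using the TensorFlow Lite Python interpreter; the model filename is illustrative (substitute the `.tflite` file shipped alongside this README), and the dummy input stands in for a real 224x224 RGB image quantized with the parameters reported by the interpreter.

```python
# Minimal inference sketch for the int8 EfficientNet-Lite0 model.
# Assumptions: the TensorFlow Lite runtime is installed and the model file
# name below matches the .tflite shipped in this directory (illustrative).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="efficientnet_lite0_224_int8.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]    # node "images", shape (1, 224, 224, 3)
output_detail = interpreter.get_output_details()[0]  # node "Softmax", shape (1, 1000)

# Use a random image in the integer dtype the quantized model expects; a real
# client would resize an RGB photo to 224x224 and quantize it using
# input_detail["quantization"] (scale, zero_point).
info = np.iinfo(input_detail["dtype"])
dummy = np.random.randint(info.min, info.max + 1,
                          size=tuple(input_detail["shape"]),
                          dtype=input_detail["dtype"])

interpreter.set_tensor(input_detail["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_detail["index"])[0]  # 1000 quantized class scores
print("Top-1 class index:", int(np.argmax(scores)))
```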

Binary file not shown.
38 changes: 38 additions & 0 deletions models/experimental/yolov3_416_416_backbone_mltools_int8/README.md
@@ -0,0 +1,38 @@
# object_detection/yolo_v3_backbone_mltools/tflite_int8

## Description
Backbone of the YOLO v3 model with an input size of 416 x 416. The backbone is quantized to int8 precision using the first 1000 images of the COCO 2014 training set for calibration. The original DarkNet pre-trained weights are used as the initial weights.
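
For reference, a calibration flow along these lines could produce such an int8 model. This is only a hedged sketch: `load_float_backbone()` and the local COCO image directory are placeholders, not part of this repository, and the exact preprocessing used for the released model is not documented here.

```python
# Sketch of int8 post-training quantization with a representative dataset,
# in the spirit of the calibration described above. load_float_backbone()
# and the image directory are hypothetical placeholders.
import glob
import tensorflow as tf

def representative_dataset():
    # First 1000 images of the COCO 2014 training set, resized to 416x416.
    for path in sorted(glob.glob("coco2014_train/*.jpg"))[:1000]:
        img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        img = tf.image.resize(img, (416, 416)) / 255.0
        yield [tf.expand_dims(tf.cast(img, tf.float32), axis=0)]

model = load_float_backbone()  # hypothetical: float Keras backbone with DarkNet weights
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("yolo_v3_416_416_backbone_int8.tflite", "wb") as f:
    f.write(converter.convert())
```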

## License
[MIT](https://spdx.org/licenses/MIT.html)
[MIT (upstream repository license)](https://github.com/zzh8829/yolov3-tf2/blob/master/LICENSE)

## Network Information
| Network Information | Value |
|---------------------|-------|
| Framework | TensorFlow Lite |
| SHA-1 Hash | 4adc0b716c5af29d957396fab2bcbc460e8b94ee |
| Size (Bytes) | 62958128 |
| Provenance | https://confluence.arm.com/display/MLENG/Yolo+v3 |
| Paper | https://pjreddie.com/media/files/papers/YOLOv3.pdf |


## Accuracy
Dataset: COCO 2014 validation set (coco-val-2014)

| Metric | Value |
|--------|-------|
| mAP50 | 0.563 |

## Network Inputs
| Input Node Name | Shape | Example Path | Example Type | Example Use Case |
|-----------------|-------|--------------|------------------|--------------|
| input_int8 | (1, 416, 416, 3) | models/object_detection/yolo_v3_backbone_mltools/tflite_int8/testing_input/0.npy | int8 | |

## Network Outputs
| Output Node Name | Shape | Description |
|------------------|-------|-------------|
| Identity_int8 | (1, 13, 13, 3, 85) | Raw predictions on the 13x13 grid: 3 anchors per cell, each with 4 box coordinates, 1 objectness score and 80 class scores. |
| Identity_1_int8 | (1, 26, 26, 3, 85) | Raw predictions on the 26x26 grid, same per-anchor layout. |
| Identity_2_int8 | (1, 52, 52, 3, 85) | Raw predictions on the 52x52 grid, same per-anchor layout. |
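
The shipped test input can be used to sanity-check the model. Below is a minimal sketch, assuming the `.tflite` file sits next to the `testing_input` directory listed above; the model filename is illustrative.

```python
# Run the int8 backbone on the shipped test input and print the three output
# grids. The model filename is illustrative; the .npy path is relative to the
# model directory listed in the tables above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo_v3_416_416_backbone_int8.tflite")
interpreter.allocate_tensors()

x = np.load("testing_input/0.npy")  # int8, shape (1, 416, 416, 3)
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], x)
interpreter.invoke()

# Each grid cell carries 3 anchors, each with 4 box coordinates, 1 objectness
# score and 80 COCO class scores (4 + 1 + 80 = 85).
for detail in interpreter.get_output_details():
    y = interpreter.get_tensor(detail["index"])
    print(detail["name"], y.shape)  # expect (1, 13, 13, 3, 85), (1, 26, 26, 3, 85), (1, 52, 52, 3, 85)
```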

Binary file not shown.
@@ -0,0 +1,44 @@
# object_detection/yolo_v3_tiny/tflite_pruned_backbone_only_int8

## Description
YOLO v3 Tiny is the lightweight version of YOLO v3, with fewer layers, for object detection and classification.
This model contains only the backbone and uses the DarkNet pre-trained weights.

## License
[MIT License](https://github.com/zzh8829/yolov3-tf2/blob/master/LICENSE)

## Network Information
| Network Information | Value |
|---------------------|-------|
| Framework | TensorFlow Lite |
| SHA-1 Hash | ec4c5ad5c92fe6bb7eb750011b0b1e322a15ba19 |
| Size (Bytes) | 8963352 |
| Provenance | https://github.com/zzh8829/yolov3-tf2 + https://pjreddie.com/media/files/yolov3-tiny.weights |
| Paper | https://arxiv.org/pdf/1804.02767.pdf |


## Dataset
| Dataset Information | Value |
|--------|-------|
| Name | Microsoft COCO 2014 |
| Description | COCO is a large-scale object detection, segmentation, and captioning dataset. |
| Link | https://cocodataset.org/#home |


## Accuracy

| Metric | Value |
|--------|-------|
| mAP | 0.345 |

## Network Inputs
| Input Node Name | Shape | Type | Example Path | Example Type | Example Shape | Example Use Case |
|-----------------|-------|-------|--------------|-------|-------|-----------------|
| serving_default_input:0 | (1, 416, 416, 3) | int8 | models/object_detection/yolo_v3_tiny/tflite_pruned_backbone_only_int8/testing_input/serving_default_input:0 | int8 | [1, 416, 416, 3] | Random input for model regression. |

## Network Outputs
| Output Node Name | Shape | Type | Example Path | Example Type | Example Shape | Example Use Case |
|-----------------|-------|-------|--------------|-------|-------|-----------------|
| StatefulPartitionedCall:0 | (1, 13, 13, 3, 85) | int8 | models/object_detection/yolo_v3_tiny/tflite_pruned_backbone_only_int8/testing_output/StatefulPartitionedCall:0 | int8 | [1, 13, 13, 3, 85] | Reference output for model regression. |
| StatefulPartitionedCall:1 | (1, 26, 26, 3, 85) | int8 | models/object_detection/yolo_v3_tiny/tflite_pruned_backbone_only_int8/testing_output/StatefulPartitionedCall:1 | int8 | [1, 26, 26, 3, 85] | Reference output for model regression. |
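
A regression check along the lines below is how the example files above are intended to be used: feed the shipped input, then compare the outputs against the shipped references. This is a hedged sketch; the model filename and the exact layout of the `testing_input`/`testing_output` files (one `.npy` per tensor, named as in the tables) are assumptions, not confirmed by this README.

```python
# Hedged regression-check sketch: run the shipped test input through the model
# and compare against the shipped reference outputs. The model filename and
# the .npy file layout are assumptions based on the tables above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo_v3_tiny_pruned_backbone_int8.tflite")
interpreter.allocate_tensors()

x = np.load("testing_input/serving_default_input:0.npy")  # int8, (1, 416, 416, 3)
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], x)
interpreter.invoke()

for detail in interpreter.get_output_details():
    got = interpreter.get_tensor(detail["index"])
    ref = np.load(f"testing_output/{detail['name']}.npy")  # e.g. StatefulPartitionedCall:0.npy
    diff = np.max(np.abs(got.astype(np.int32) - ref.astype(np.int32)))
    print(f"{detail['name']}: shape {got.shape}, max abs diff {diff}")
```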

Binary file not shown.
