diff --git a/README.md b/README.md index ee682da..c41fa12 100644 --- a/README.md +++ b/README.md @@ -1,26 +1,18 @@ # TensorFlow Lite Flutter Helper Library -Makes use of TensorFlow Lite Interpreter on Flutter easier by -providing simple architecture for processing and manipulating -input and output of TFLite Models. - -API design and documentation is identical to the TensorFlow Lite -Android Support Library. +TFLite Flutter Helper Library brings the [TFLite Support Library](https://www.tensorflow.org/lite/inference_with_metadata/lite_support) and [TFLite Support Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) to Flutter, helping you develop ML applications and deploy TFLite models onto mobile devices quickly without compromising on performance. ## Getting Started ### Setup TFLite Flutter Plugin -Include `tflite_flutter: ^` in your pubspec.yaml. Follow the initial setup -instructions given [here](https://github.com/am15h/tflite_flutter_plugin#most-important-initial-setup) +Follow the initial setup instructions given [here](https://github.com/am15h/tflite_flutter_plugin#most-important-initial-setup) -## Image Processing +### Basic image manipulation and conversion TFLite Helper depends on [flutter image package](https://pub.dev/packages/image) internally for Image Processing. -### Basic image manipulation and conversion - The TensorFlow Lite Support Library has a suite of basic image manipulation methods such as crop and resize. To use it, create an `ImageProcessor` and add the required operations.
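Building an `ImageProcessor` as described above can be sketched as follows — a minimal, hypothetical example assuming the `ImageProcessorBuilder`, `ResizeOp`, and `ResizeMethod` names from this helper library (which mirror the Android Support Library); adjust the target size to your model's input tensor:

```dart
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

// Minimal preprocessing pipeline sketch: resize the input to the
// 224x224 shape that many image-classification models expect.
final ImageProcessor imageProcessor = ImageProcessorBuilder()
    .add(ResizeOp(224, 224, ResizeMethod.NEAREST_NEIGHBOUR))
    .build();
```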
To convert the image into the tensor format required by the TensorFlow Lite interpreter, @@ -42,6 +34,22 @@ TensorImage tensorImage = TensorImage.fromFile(imageFile); tensorImage = imageProcessor.process(tensorImage); ``` +Sample app: [Image Classification](https://github.com/am15h/tflite_flutter_helper/tree/master/example/image_classification) + +### Basic audio data processing + +The TensorFlow Lite Support Library also defines a TensorAudio class wrapping some basic audio data processing methods. + +```dart +TensorAudio tensorAudio = TensorAudio.create( + TensorAudioFormat.create(1, sampleRate), size); +tensorAudio.loadShortBytes(audioBytes); + +TensorBuffer inputBuffer = tensorAudio.tensorBuffer; +``` + +Sample app: [Audio Classification](https://github.com/am15h/tflite_flutter_helper/tree/master/example/audio_classification) + ### Create output objects and run the model ```dart @@ -141,8 +149,39 @@ QuantizationParams inputParams = interpreter.getInputTensor(0).params; QuantizationParams outputParams = interpreter.getOutputTensor(0).params; ``` -## Coming Soon +## Task Library + +Currently, text-based models like `NLClassifier`, `BertNLClassifier` and `BertQuestionAnswerer` are available to use with the Flutter Task Library. + +### Integrate Natural Language Classifier + +The Task Library's `NLClassifier` API classifies input text into different categories, and is a versatile and configurable API that can handle most text classification models. A detailed guide is available [here](https://www.tensorflow.org/lite/inference_with_metadata/task_library/nl_classifier). + +```dart +final classifier = await NLClassifier.createFromAsset('assets/$_modelFileName', + options: NLClassifierOptions()); +List<Category> predictions = classifier.classify(rawText); +``` + +Sample app: [Text Classification](https://github.com/am15h/tflite_flutter_plugin/tree/master/example/lib) using the Task Library.
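The `classify` call above returns a list of `Category` results. A hedged sketch of consuming them, assuming `Category` exposes `label` and `score` fields as in the helper library:

```dart
// Print each predicted category with its confidence score.
// `predictions` is the list returned by classifier.classify(rawText).
for (final category in predictions) {
  print('${category.label}: ${category.score.toStringAsFixed(3)}');
}
```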
+ +### Integrate BERT natural language classifier + +The Task Library `BertNLClassifier` API is very similar to `NLClassifier`, classifying input text into different categories, except that this API is specially tailored for BERT-related models that require WordPiece and SentencePiece tokenizations outside the TFLite model. A detailed guide is available [here](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_nl_classifier). + +```dart +final classifier = await BertNLClassifier.createFromAsset('assets/$_modelFileName', + options: BertNLClassifierOptions()); +List<Category> predictions = classifier.classify(rawText); +``` + +### Integrate BERT question answerer + +The Task Library `BertQuestionAnswerer` API loads a BERT model and answers questions based on the content of a given passage. For more information, see the documentation for the Question-Answer model [here](https://www.tensorflow.org/lite/models/bert_qa/overview). A detailed guide is available [here](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer). + +```dart +final bertQuestionAnswerer = await BertQuestionAnswerer.createFromAsset('assets/$_modelFileName'); +List<QaAnswer> answers = bertQuestionAnswerer.answer(context, question); +``` -* More image operations -* Support for text-related applications. -* Support for audio-related applications. \ No newline at end of file +Sample app: [Bert Question Answerer Sample](https://github.com/am15h/tflite_flutter_helper/tree/master/example/bert_question_answer) diff --git a/example/audio_classification/README.md b/example/audio_classification/README.md index c4d68f1..e9416d5 100644 --- a/example/audio_classification/README.md +++ b/example/audio_classification/README.md @@ -1,3 +1,36 @@ -# Audio Classification Flutter App +# Real-time Audio Classification in Flutter -Demonstrates the usage of TensorAudio API. +Real-time audio classification in Flutter. It uses: + +* Interpreter API from TFLite Flutter Plugin.
+* TensorAudio API from TFLite Flutter Support Library. +* [YAMNet](https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1), + an audio event classification model. + +
+<p align="center">
+  <img src="audio_demo.gif" alt="animated" />
+</p>
+ +## Build and run + +### Step 1. Clone TFLite Flutter Helper repository + +Clone TFLite Flutter Helper repository to your computer to get the demo +application. + +``` +git clone https://github.com/am15h/tflite_flutter_helper +``` + +### Step 2. Run the application + +``` +cd example/audio_classification/ +flutter run +``` + +## Resources used: + +* [TensorFlow Lite](https://www.tensorflow.org/lite) +* [Audio Classification using TensorFlow Lite](https://www.tensorflow.org/lite/examples/audio_classification/overview) +* [YAMNet audio classification model](https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1) diff --git a/example/audio_classification/audio_demo.gif b/example/audio_classification/audio_demo.gif new file mode 100644 index 0000000..d511ef1 Binary files /dev/null and b/example/audio_classification/audio_demo.gif differ diff --git a/example/audio_classification/pubspec.lock b/example/audio_classification/pubspec.lock index 7690aa5..c3dcb28 100644 --- a/example/audio_classification/pubspec.lock +++ b/example/audio_classification/pubspec.lock @@ -22,6 +22,20 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "2.1.0" + camera: + dependency: transitive + description: + name: camera + url: "https://pub.dartlang.org" + source: hosted + version: "0.8.1+7" + camera_platform_interface: + dependency: transitive + description: + name: camera_platform_interface + url: "https://pub.dartlang.org" + source: hosted + version: "2.1.0" characters: dependency: transitive description: @@ -50,6 +64,13 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "1.15.0" + cross_file: + dependency: transitive + description: + name: cross_file + url: "https://pub.dartlang.org" + source: hosted + version: "0.3.1+4" crypto: dependency: transitive description: @@ -158,6 +179,13 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "2.0.1" + pedantic: + dependency: transitive + description: + name: pedantic + url: 
"https://pub.dartlang.org" + source: hosted + version: "1.11.1" petitparser: dependency: transitive description: @@ -226,6 +254,13 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "2.1.0" + stream_transform: + dependency: transitive + description: + name: stream_transform + url: "https://pub.dartlang.org" + source: hosted + version: "2.0.0" string_scanner: dependency: transitive description: @@ -305,4 +340,4 @@ packages: version: "5.1.2" sdks: dart: ">=2.13.0 <3.0.0" - flutter: ">=1.26.0-17.6.pre" + flutter: ">=2.0.0" diff --git a/example/bert_question_answer b/example/bert_question_answer new file mode 160000 index 0000000..2c380f2 --- /dev/null +++ b/example/bert_question_answer @@ -0,0 +1 @@ +Subproject commit 2c380f26ddde151e79bed80fccd97ac0e1436df9 diff --git a/example/image_classification/android/app/build.gradle b/example/image_classification/android/app/build.gradle index 6294253..fb6aa1f 100644 --- a/example/image_classification/android/app/build.gradle +++ b/example/image_classification/android/app/build.gradle @@ -34,7 +34,7 @@ android { defaultConfig { // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html). 
applicationId "com.example.imageclassification" - minSdkVersion 16 + minSdkVersion 21 targetSdkVersion 28 versionCode flutterVersionCode.toInteger() versionName flutterVersionName diff --git a/example/image_classification/lib/classifier.dart b/example/image_classification/lib/classifier.dart index 13782cb..bad50d9 100644 --- a/example/image_classification/lib/classifier.dart +++ b/example/image_classification/lib/classifier.dart @@ -18,7 +18,8 @@ abstract class Classifier { late TensorImage _inputImage; late TensorBuffer _outputBuffer; - TfLiteType _outputType = TfLiteType.uint8; + late TfLiteType _inputType; + late TfLiteType _outputType; final String _labelsFileName = 'assets/labels.txt'; @@ -52,6 +53,7 @@ abstract class Classifier { _inputShape = interpreter.getInputTensor(0).shape; _outputShape = interpreter.getOutputTensor(0).shape; + _inputType = interpreter.getInputTensor(0).type; _outputType = interpreter.getOutputTensor(0).type; _outputBuffer = TensorBuffer.createFixedSize(_outputShape, _outputType); @@ -83,11 +85,9 @@ abstract class Classifier { } Category predict(Image image) { - if (interpreter == null) { - throw StateError('Cannot run inference, Intrepreter is null'); - } final pres = DateTime.now().millisecondsSinceEpoch; - _inputImage = TensorImage.fromImage(image); + _inputImage = TensorImage(_inputType); + _inputImage.loadImage(image); _inputImage = _preProcess(); final pre = DateTime.now().millisecondsSinceEpoch - pres; @@ -108,9 +108,7 @@ abstract class Classifier { } void close() { - if (interpreter != null) { - interpreter.close(); - } + interpreter.close(); } } diff --git a/example/image_classification/pubspec.lock b/example/image_classification/pubspec.lock index b0669ef..b229d1e 100644 --- a/example/image_classification/pubspec.lock +++ b/example/image_classification/pubspec.lock @@ -22,6 +22,20 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "2.1.0" + camera: + dependency: transitive + description: + name: camera + url: 
"https://pub.dartlang.org" + source: hosted + version: "0.8.1+7" + camera_platform_interface: + dependency: transitive + description: + name: camera_platform_interface + url: "https://pub.dartlang.org" + source: hosted + version: "2.1.0" characters: dependency: transitive description: @@ -50,6 +64,13 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "1.15.0" + cross_file: + dependency: transitive + description: + name: cross_file + url: "https://pub.dartlang.org" + source: hosted + version: "0.3.1+4" crypto: dependency: transitive description: @@ -302,6 +323,13 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "2.1.0" + stream_transform: + dependency: transitive + description: + name: stream_transform + url: "https://pub.dartlang.org" + source: hosted + version: "2.0.0" string_scanner: dependency: transitive description: @@ -331,11 +359,11 @@ packages: source: hosted version: "0.3.0" tflite_flutter: - dependency: "direct main" + dependency: transitive description: - path: "../../../tflite_flutter_plugin" - relative: true - source: path + name: tflite_flutter + url: "https://pub.dartlang.org" + source: hosted version: "0.9.0" tflite_flutter_helper: dependency: "direct main" @@ -402,4 +430,4 @@ packages: version: "5.1.0" sdks: dart: ">=2.12.0 <3.0.0" - flutter: ">=1.26.0-17.6.pre" + flutter: ">=2.0.0" diff --git a/example/image_classification/pubspec.yaml b/example/image_classification/pubspec.yaml index accc9f1..a407d5e 100644 --- a/example/image_classification/pubspec.yaml +++ b/example/image_classification/pubspec.yaml @@ -24,9 +24,6 @@ dependencies: logger: ^1.0.0 path_provider: - tflite_flutter: - path: - ../../../tflite_flutter_plugin tflite_flutter_helper: path: ../../ diff --git a/lib/src/image/base_image_container.dart b/lib/src/image/base_image_container.dart new file mode 100644 index 0000000..56a0255 --- /dev/null +++ b/lib/src/image/base_image_container.dart @@ -0,0 +1,30 @@ +import 'package:camera/camera.dart'; 
+import 'package:image/image.dart'; +import 'package:tflite_flutter/tflite_flutter.dart'; +import 'package:tflite_flutter_helper/src/image/color_space_type.dart'; +import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart'; + +abstract class BaseImageContainer { + + /// Performs deep copy of the {@link ImageContainer}. */ + BaseImageContainer clone(); + + /// Returns the width of the image. */ + int get width; + + /// Returns the height of the image. */ + int get height; + + /// Gets the {@link Image} representation of the underlying image format. */ + Image get image; + + /// Gets the {@link TensorBuffer} representation with the specific {@code dataType} of the + /// underlying image format. + TensorBuffer getTensorBuffer(TfLiteType dataType); + + /// Gets the {@link Image} representation of the underlying image format. */ + CameraImage get mediaImage; + + /// Returns the color space type of the image. */ + ColorSpaceType get colorSpaceType; +} diff --git a/lib/src/image/camera_image_container.dart b/lib/src/image/camera_image_container.dart new file mode 100644 index 0000000..b05a2fb --- /dev/null +++ b/lib/src/image/camera_image_container.dart @@ -0,0 +1,50 @@ +import 'package:camera/camera.dart'; +import 'package:image/image.dart'; +import 'package:quiver/check.dart'; +import 'package:tflite_flutter/tflite_flutter.dart'; +import 'package:tflite_flutter_helper/src/image/color_space_type.dart'; +import 'package:tflite_flutter_helper/src/image/base_image_container.dart'; +import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart'; + +class CameraImageContainer extends BaseImageContainer { + late final CameraImage cameraImage; + + CameraImageContainer._(CameraImage cameraImage) { + checkArgument(cameraImage.format.group == ImageFormatGroup.yuv420, + message: "Only supports loading YUV_420_888 Image."); + this.cameraImage = cameraImage; + } + + static CameraImageContainer create(CameraImage cameraImage) { + return 
CameraImageContainer._(cameraImage); + } + + @override + BaseImageContainer clone() { + throw UnsupportedError("CameraImage cannot be cloned"); + } + + @override + ColorSpaceType get colorSpaceType { + return ColorSpaceType.YUV_420_888; + } + + @override + TensorBuffer getTensorBuffer(TfLiteType dataType) { + throw UnsupportedError( + 'Converting CameraImage to TensorBuffer is not supported.'); + } + + @override + int get height => cameraImage.height; + + @override + Image get image => throw UnsupportedError( + 'Converting CameraImage to Image is not supported.'); + + @override + CameraImage get mediaImage => cameraImage; + + @override + int get width => cameraImage.width; +} diff --git a/lib/src/image/color_space_type.dart b/lib/src/image/color_space_type.dart new file mode 100644 index 0000000..8415966 --- /dev/null +++ b/lib/src/image/color_space_type.dart @@ -0,0 +1,287 @@ +import 'package:image/image.dart'; +import 'package:quiver/check.dart'; +import 'package:tflite_flutter_helper/src/image/image_conversions.dart'; +import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart'; + +abstract class ColorSpaceType { + // The first element of the normalized shape. + static const int BATCH_DIM = 0; + // The batch axis should always be one. + static const int BATCH_VALUE = 1; + // The second element of the normalized shape. + static const int HEIGHT_DIM = 1; + // The third element of the normalized shape. + static const int WIDTH_DIM = 2; + // The fourth element of the normalized shape.
+ static const int CHANNEL_DIM = 3; + + static const ColorSpaceType RGB = const _RGB(); + static const ColorSpaceType GRAYSCALE = const _GRAYSCALE(); + static const ColorSpaceType NV12 = const _NV12(); + static const ColorSpaceType NV21 = const _NV21(); + static const ColorSpaceType YV12 = const _YV12(); + static const ColorSpaceType YV21 = const _YV21(); + static const ColorSpaceType YUV_420_888 = const _YUV_420_888(); + + final int value; + const ColorSpaceType(this.value); + + int getValue() { + return value; + } + + /// Verifies if the given shape matches the color space type. + /// + /// @throws ArgumentError if {@code shape} does not match the color space type + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + void assertShape(List shape) { + assertRgbOrGrayScale("assertShape()"); + + List normalizedShape = getNormalizedShape(shape); + checkArgument(isValidNormalizedShape(normalizedShape), + message: getShapeInfoMessage() + "The provided image shape is $shape"); + } + + /// Verifies if the given {@code numElements} in an image buffer matches {@code height} / {@code + /// width} under this color space type. For example, the {@code numElements} of an RGB image of 30 + /// x 20 should be {@code 30 * 20 * 3 = 1800}; the {@code numElements} of an NV21 image of 30 x 20 + /// should be {@code 30 * 20 + ((30 + 1) / 2 * (20 + 1) / 2) * 2 = 952}. + /// + /// @throws ArgumentError if {@code shape} does not match the color space type + void assertNumElements(int numElements, int height, int width) { + checkArgument(numElements >= getNumElements(height, width), + message: + "The given number of elements $numElements does not match the image ${this.toString()} in $height x $width. The" + + " expected number of elements should be at least ${getNumElements(height, width)}."); + } + + /// Converts a {@link TensorBuffer} that represents an image to an Image with the color space type.
+ /// + /// @throws ArgumentError if the shape of buffer does not match the color space type, + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + Image convertTensorBufferToImage(TensorBuffer buffer) { + throw UnsupportedError( + "convertTensorBufferToImage() is unsupported for the color space type " + + this.toString()); + } + + /// Returns the width of the given shape corresponding to the color space type. + /// + /// @throws ArgumentError if {@code shape} does not match the color space type + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + int getWidth(List shape) { + assertRgbOrGrayScale("getWidth()"); + assertShape(shape); + return getNormalizedShape(shape)[WIDTH_DIM]; + } + + /// Returns the height of the given shape corresponding to the color space type. + /// + /// @throws ArgumentError if {@code shape} does not match the color space type + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + int getHeight(List shape) { + assertRgbOrGrayScale("getHeight()"); + assertShape(shape); + return getNormalizedShape(shape)[HEIGHT_DIM]; + } + + /// Returns the channel value corresponding to the color space type. + /// + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + int getChannelValue() { + throw UnsupportedError( + "getChannelValue() is unsupported for the color space type " + + this.toString()); + } + + /// Gets the normalized shape in the form of (1, h, w, c). Sometimes, a given shape may not have + /// batch or channel axis. + /// + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + List getNormalizedShape(List shape) { + throw UnsupportedError( + "getNormalizedShape() is unsupported for the color space type " + + this.toString()); + } + + /// Returns the shape information corresponding to the color space type. 
+ /// + /// @throws UnsupportedError if the color space type is not RGB or GRAYSCALE + String getShapeInfoMessage() { + throw UnsupportedError( + "getShapeInfoMessage() is unsupported for the color space type " + + this.toString()); + } + + /// Gets the number of elements given the height and width of an image. For example, the number of + /// elements of an RGB image of 30 x 20 is {@code 30 * 20 * 3 = 1800}; the number of elements of a + /// NV21 image of 30 x 20 is {@code 30 * 20 + ((30 + 1) / 2 * (20 + 1) / 2) * 2 = 952}. + int getNumElements(int height, int width); + + static int getYuv420NumElements(int height, int width) { + // Height and width of U/V planes are half of the Y plane. + return height * width + + ((height + 1) / 2).floor() * ((width + 1) / 2).floor() * 2; + } + + /// Inserts a value at the specified position and return the array. */ + static List insertValue(List array, int pos, int value) { + List newArray = List.filled(array.length + 1, 0); + for (int i = 0; i < pos; i++) { + newArray[i] = array[i]; + } + newArray[pos] = value; + for (int i = pos + 1; i < newArray.length; i++) { + newArray[i] = array[i - 1]; + } + return newArray; + } + + bool isValidNormalizedShape(List shape) { + return shape[BATCH_DIM] == BATCH_VALUE && + shape[HEIGHT_DIM] > 0 && + shape[WIDTH_DIM] > 0 && + shape[CHANNEL_DIM] == getChannelValue(); + } + + /// Some existing methods are only valid for RGB and GRAYSCALE images. */ + void assertRgbOrGrayScale(String unsupportedMethodName) { + if (this != ColorSpaceType.RGB && this != ColorSpaceType.GRAYSCALE) { + throw UnsupportedError(unsupportedMethodName + + " only supports RGB and GRAYSCALE formats, but not " + + this.toString()); + } + } +} + +class _RGB extends ColorSpaceType { + const _RGB() : super(0); + + // The channel axis should always be 3 for RGB images. 
static const int CHANNEL_VALUE = 3; + + Image convertTensorBufferToImage(TensorBuffer buffer) { + return ImageConversions.convertRgbTensorBufferToImage(buffer); + } + + int getChannelValue() { + return CHANNEL_VALUE; + } + + List getNormalizedShape(List shape) { + switch (shape.length) { + // The shape is in (h, w, c) format. + case 3: + return ColorSpaceType.insertValue( + shape, ColorSpaceType.BATCH_DIM, ColorSpaceType.BATCH_VALUE); + case 4: + return shape; + default: + throw ArgumentError(getShapeInfoMessage() + + "The provided image shape is " + + shape.toString()); + } + } + + int getNumElements(int height, int width) { + return height * width * CHANNEL_VALUE; + } + + String getShapeInfoMessage() { + return "The shape of an RGB image should be (h, w, c) or (1, h, w, c), with channels" + + " representing R, G, B in order. "; + } +} + +/// Each pixel is a single element representing only the amount of light. */ +class _GRAYSCALE extends ColorSpaceType { + // The channel axis should always be 1 for grayscale images. + static const int CHANNEL_VALUE = 1; + + const _GRAYSCALE() : super(1); + + Image convertTensorBufferToImage(TensorBuffer buffer) { + return ImageConversions.convertGrayscaleTensorBufferToImage(buffer); + } + + int getChannelValue() { + return CHANNEL_VALUE; + } + + List getNormalizedShape(List shape) { + switch (shape.length) { + // The shape is in (h, w) format. + case 2: + List shapeWithBatch = ColorSpaceType.insertValue( + shape, ColorSpaceType.BATCH_DIM, ColorSpaceType.BATCH_VALUE); + return ColorSpaceType.insertValue( + shapeWithBatch, ColorSpaceType.CHANNEL_DIM, CHANNEL_VALUE); + case 4: + return shape; + default: + // (1, h, w) and (h, w, 1) are potential grayscale image shapes. However, since they + // both have three dimensions, it will require extra info to differentiate between them. + // Since we haven't encountered real use cases of these two shapes, they are not supported + // at this moment to avoid confusion.
We may want to revisit it in the future. + throw new ArgumentError(getShapeInfoMessage() + + "The provided image shape is " + + shape.toString()); + } + } + + int getNumElements(int height, int width) { + return height * width; + } + + String getShapeInfoMessage() { + return "The shape of a grayscale image should be (h, w) or (1, h, w, 1). "; + } +} + +/// YUV420sp format, encoded as "YYYYYYYY UVUV". */ +class _NV12 extends ColorSpaceType { + const _NV12() : super(2); + + int getNumElements(int height, int width) { + return ColorSpaceType.getYuv420NumElements(height, width); + } +} + +/// YUV420sp format, encoded as "YYYYYYYY VUVU", the standard picture format on Android Camera1 +/// preview. +class _NV21 extends ColorSpaceType { + const _NV21() : super(3); + + int getNumElements(int height, int width) { + return ColorSpaceType.getYuv420NumElements(height, width); + } +} + +/// YUV420p format, encoded as "YYYYYYYY VV UU". */ +class _YV12 extends ColorSpaceType { + const _YV12() : super(4); + + int getNumElements(int height, int width) { + return ColorSpaceType.getYuv420NumElements(height, width); + } +} + +/// YUV420p format, encoded as "YYYYYYYY UU VV". */ +class _YV21 extends ColorSpaceType { + const _YV21() : super(5); + int getNumElements(int height, int width) { + return ColorSpaceType.getYuv420NumElements(height, width); + } +} + +/// YUV420 format corresponding to {@link android.graphics.ImageFormat#YUV_420_888}. The actual +/// encoding format (i.e. NV12 / Nv21 / YV12 / YV21) depends on the implementation of the image. +/// +///

Use this format only when you load an {@link android.media.Image}. +class _YUV_420_888 extends ColorSpaceType { + const _YUV_420_888() : super(6); + + int getNumElements(int height, int width) { + return ColorSpaceType.getYuv420NumElements(height, width); + } +} diff --git a/lib/src/image/image_container.dart b/lib/src/image/image_container.dart new file mode 100644 index 0000000..f35d375 --- /dev/null +++ b/lib/src/image/image_container.dart @@ -0,0 +1,61 @@ +import 'package:camera/camera.dart'; +import 'package:image/image.dart'; +import 'package:tflite_flutter/tflite_flutter.dart'; +import 'package:tflite_flutter_helper/src/image/base_image_container.dart'; +import 'package:tflite_flutter_helper/src/image/image_conversions.dart'; +import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart'; +import 'package:tflite_flutter_helper/src/image/color_space_type.dart'; + +class ImageContainer extends BaseImageContainer { + late final Image _image; + + ImageContainer._(Image image) { + this._image = image; + } + + static ImageContainer create(Image image) { + return ImageContainer._(image); + } + + @override + BaseImageContainer clone() { + return create(_image.clone()); + } + + @override + ColorSpaceType get colorSpaceType { + int len = _image.data.length; + bool isGrayscale = true; + for (int i = (len / 4).floor(); i < _image.data.length; i++) { + if (_image.data[i] != 0) { + isGrayscale = false; + break; + } + } + if (isGrayscale) { + return ColorSpaceType.GRAYSCALE; + } else { + return ColorSpaceType.RGB; + } + } + + @override + TensorBuffer getTensorBuffer(TfLiteType dataType) { + TensorBuffer buffer = TensorBuffer.createDynamic(dataType); + ImageConversions.convertImageToTensorBuffer(image, buffer); + return buffer; + } + + @override + int get height => _image.height; + + @override + Image get image => _image; + + @override + CameraImage get mediaImage => throw UnsupportedError( + 'Converting from Image to CameraImage is unsupported'); + + @override 
+ int get width => _image.width; +} diff --git a/lib/src/image/image_conversions.dart b/lib/src/image/image_conversions.dart index 1072dcc..0d911b7 100644 --- a/lib/src/image/image_conversions.dart +++ b/lib/src/image/image_conversions.dart @@ -1,33 +1,22 @@ import 'package:image/image.dart'; import 'package:tflite_flutter/tflite_flutter.dart'; -import 'package:tflite_flutter_helper/src/image/tensor_image.dart'; +import 'package:tflite_flutter_helper/src/image/color_space_type.dart'; import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart'; /// Implements some stateless image conversion methods. /// /// This class is an internal helper. -class ImageConversion { - static Image convertTensorBufferToImage(TensorBuffer buffer, Image image) { - if (buffer.getDataType() != TfLiteType.uint8) { - throw UnsupportedError( - "Converting TensorBuffer of type ${buffer.getDataType()} to Image is not supported yet.", - ); - } +class ImageConversions { + static Image convertRgbTensorBufferToImage(TensorBuffer buffer) { List shape = buffer.getShape(); - TensorImage.checkImageTensorShape(shape); - int h = shape[shape.length - 3]; - int w = shape[shape.length - 2]; - if (image.width != w || image.height != h) { - throw ArgumentError( - "Given image has different width or height ${[ - image.width, - image.height - ]} with the expected ones ${[w, h]}.", - ); - } + ColorSpaceType rgb = ColorSpaceType.RGB; + rgb.assertShape(shape); - List rgbValues = buffer.getIntList(); + int h = rgb.getHeight(shape); + int w = rgb.getWidth(shape); + Image image = Image(w, h); + List rgbValues = buffer.getIntList(); assert(rgbValues.length == w * h * 3); for (int i = 0, j = 0, wi = 0, hi = 0; j < rgbValues.length; i++) { @@ -45,19 +34,51 @@ class ImageConversion { return image; } + static Image convertGrayscaleTensorBufferToImage(TensorBuffer buffer) { + // Convert buffer into Uint8 as needed. + TensorBuffer uint8Buffer = buffer.getDataType() == TfLiteType.uint8 + ? 
buffer
+        : TensorBuffer.createFrom(buffer, TfLiteType.uint8);
+
+    final shape = uint8Buffer.getShape();
+    final grayscale = ColorSpaceType.GRAYSCALE;
+    grayscale.assertShape(shape);
+
+    final image = Image.fromBytes(grayscale.getWidth(shape),
+        grayscale.getHeight(shape), uint8Buffer.getBuffer().asUint8List(),
+        format: Format.luminance);
+
+    return image;
+  }
+
   static void convertImageToTensorBuffer(Image image, TensorBuffer buffer) {
     int w = image.width;
     int h = image.height;
     List intValues = image.data;
-
+    int flatSize = w * h * 3;
     List shape = [h, w, 3];
-    List rgbValues = List.filled(h * w * 3, 0);
-    for (int i = 0, j = 0; i < intValues.length; i++) {
-      rgbValues[j++] = ((intValues[i]) & 0xFF);
-      rgbValues[j++] = ((intValues[i] >> 8) & 0xFF);
-      rgbValues[j++] = ((intValues[i] >> 16) & 0xFF);
+    switch (buffer.getDataType()) {
+      case TfLiteType.uint8:
+        List byteArr = List.filled(flatSize, 0);
+        for (int i = 0, j = 0; i < intValues.length; i++) {
+          byteArr[j++] = ((intValues[i]) & 0xFF);
+          byteArr[j++] = ((intValues[i] >> 8) & 0xFF);
+          byteArr[j++] = ((intValues[i] >> 16) & 0xFF);
+        }
+        buffer.loadList(byteArr, shape: shape);
+        break;
+      case TfLiteType.float32:
+        List floatArr = List.filled(flatSize, 0.0);
+        for (int i = 0, j = 0; i < intValues.length; i++) {
+          floatArr[j++] = ((intValues[i]) & 0xFF).toDouble();
+          floatArr[j++] = ((intValues[i] >> 8) & 0xFF).toDouble();
+          floatArr[j++] = ((intValues[i] >> 16) & 0xFF).toDouble();
+        }
+        buffer.loadList(floatArr, shape: shape);
+        break;
+      default:
+        throw StateError(
+            "${buffer.getDataType()} is unsupported with TensorBuffer.");
     }
-
-    buffer.loadList(rgbValues, shape: shape);
   }
 }
diff --git a/lib/src/image/ops/transform_to_grayscale_op.dart b/lib/src/image/ops/transform_to_grayscale_op.dart
new file mode 100644
index 0000000..32dab2e
--- /dev/null
+++ b/lib/src/image/ops/transform_to_grayscale_op.dart
@@ -0,0 +1,29 @@
+import 'dart:math';
+import 'package:image/image.dart' as imageLib;
+import 'package:tflite_flutter_helper/src/image/image_operator.dart';
+import 'package:tflite_flutter_helper/src/image/tensor_image.dart';
+
+class TransformToGrayscaleOp extends ImageOperator {
+  @override
+  TensorImage apply(TensorImage image) {
+    final transformedImage = imageLib.grayscale(image.image);
+    image.loadImage(transformedImage);
+    return image;
+  }
+
+  @override
+  int getOutputImageHeight(int inputImageHeight, int inputImageWidth) {
+    return inputImageHeight;
+  }
+
+  @override
+  int getOutputImageWidth(int inputImageHeight, int inputImageWidth) {
+    return inputImageWidth;
+  }
+
+  @override
+  Point inverseTransform(
+      Point point, int inputImageHeight, int inputImageWidth) {
+    return point;
+  }
+}
diff --git a/lib/src/image/tensor_buffer_container.dart b/lib/src/image/tensor_buffer_container.dart
new file mode 100644
index 0000000..93be2c0
--- /dev/null
+++ b/lib/src/image/tensor_buffer_container.dart
@@ -0,0 +1,108 @@
+import 'package:camera/camera.dart';
+import 'package:image/image.dart';
+import 'package:quiver/check.dart';
+import 'package:tflite_flutter/tflite_flutter.dart';
+import 'package:tflite_flutter_helper/src/image/color_space_type.dart';
+import 'package:tflite_flutter_helper/src/image/base_image_container.dart';
+import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart';
+
+class TensorBufferContainer implements BaseImageContainer {
+  late final TensorBuffer _buffer;
+  late final ColorSpaceType _colorSpaceType;
+  late final int _height;
+  late final int _width;
+
+  /// Creates a {@link TensorBufferContainer} object with the specified {@link
+  /// TensorImage#ColorSpaceType}.
+  ///
+  /// Only supports {@link ColorSpaceType#RGB} and {@link ColorSpaceType#GRAYSCALE}. Use {@link
+  /// #create(TensorBuffer, ImageProperties)} for other color space types.
+  ///
+  /// @throws IllegalArgumentException if the shape of the {@link TensorBuffer} does not match the
+  /// specified color space type, or if the color space type is not supported
+  static TensorBufferContainer create(TensorBuffer buffer, ColorSpaceType colorSpaceType) {
+    checkArgument(
+        colorSpaceType == ColorSpaceType.RGB || colorSpaceType == ColorSpaceType.GRAYSCALE,
+        message: "Only ColorSpaceType.RGB and ColorSpaceType.GRAYSCALE are supported. Use" +
+            " `create(TensorBuffer, ImageProperties)` for other color space types.");
+
+    return TensorBufferContainer._(
+        buffer,
+        colorSpaceType,
+        colorSpaceType.getHeight(buffer.getShape()),
+        colorSpaceType.getWidth(buffer.getShape()));
+  }
+
+  TensorBufferContainer._(
+      TensorBuffer buffer, ColorSpaceType colorSpaceType, int height, int width) {
+    checkArgument(
+        colorSpaceType != ColorSpaceType.YUV_420_888,
+        message: "The actual encoding format of YUV420 is required. Choose a ColorSpaceType from: NV12," +
+            " NV21, YV12, YV21. Use YUV_420_888 only when loading an android.media.Image.");
+
+    colorSpaceType.assertNumElements(buffer.getFlatSize(), height, width);
+    this._buffer = buffer;
+    this._colorSpaceType = colorSpaceType;
+    this._height = height;
+    this._width = width;
+  }
+
+  @override
+  TensorBufferContainer clone() {
+    return TensorBufferContainer._(
+        TensorBuffer.createFrom(_buffer, _buffer.getDataType()),
+        colorSpaceType,
+        height,
+        width);
+  }
+
+  @override
+  Image get image {
+    if (_buffer.getDataType() != TfLiteType.uint8) {
+      // Print warning instead of throwing an exception. When using float models, users may want to
+      // convert the resulting float image into Bitmap. That's fine to do so, as long as they are
+      // aware of the potential accuracy lost when casting to uint8.
+      // Log.w(
+      //     TAG,
+      //     " TensorBufferContainer is holding a non-uint8 image. The conversion to Bitmap"
+      //         + " will cause numeric casting and clamping on the data value.");
+    }
+
+    return colorSpaceType.convertTensorBufferToImage(_buffer);
+  }
+
+  @override
+  TensorBuffer getTensorBuffer(TfLiteType dataType) {
+    // If the data type of buffer is desired, return it directly. Not making a defensive copy for
+    // performance considerations. During image processing, users may need to set and get the
+    // TensorBuffer many times.
+    // Otherwise, create another one with the expected data type.
+    return _buffer.getDataType() == dataType ? _buffer : TensorBuffer.createFrom(_buffer, dataType);
+  }
+
+  @override
+  CameraImage get mediaImage {
+    throw UnsupportedError(
+        "Converting from TensorBuffer to android.media.Image is unsupported.");
+  }
+
+  @override
+  int get width {
+    // In case the underlying buffer in TensorBuffer gets updated after TensorImage is created.
+    _colorSpaceType.assertNumElements(_buffer.getFlatSize(), _height, _width);
+    return _width;
+  }
+
+  @override
+  int get height {
+    // In case the underlying buffer in TensorBuffer gets updated after TensorImage is created.
+    _colorSpaceType.assertNumElements(_buffer.getFlatSize(), _height, _width);
+    return _height;
+  }
+
+  @override
+  ColorSpaceType get colorSpaceType {
+    return _colorSpaceType;
+  }
+
+}
diff --git a/lib/src/image/tensor_image.dart b/lib/src/image/tensor_image.dart
index 8d025a1..eb02102 100644
--- a/lib/src/image/tensor_image.dart
+++ b/lib/src/image/tensor_image.dart
@@ -2,9 +2,12 @@ import 'dart:io';
 import 'dart:typed_data';
 
 import 'package:image/image.dart';
+import 'package:quiver/check.dart';
 import 'package:tflite_flutter/tflite_flutter.dart';
-import 'package:tflite_flutter_helper/src/common/support_preconditions.dart';
-import 'package:tflite_flutter_helper/src/image/image_conversions.dart';
+import 'package:tflite_flutter_helper/src/image/base_image_container.dart';
+import 'package:tflite_flutter_helper/src/image/color_space_type.dart';
+import 'package:tflite_flutter_helper/src/image/image_container.dart';
+import 'package:tflite_flutter_helper/src/image/tensor_buffer_container.dart';
 import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart';
 
 /// [TensorImage] is the wrapper class for [Image] object. When using image processing utils in
@@ -21,17 +24,18 @@ import 'package:tflite_flutter_helper/src/tensorbuffer/tensorbuffer.dart';
 /// convert one to the other when needed.
 ///
 /// IMPORTANT: The container doesn't own its data. Callers should not modify data objects those
-/// are passed to [_ImageContainer.bufferImage] or [_ImageContainer.tensorBuffer].
+/// are passed to [BaseImageContainer.bufferImage] or [BaseImageContainer.tensorBuffer].
 ///
 /// See [ImageProcessor] which is often used for transforming a [TensorImage].
 class TensorImage {
-  _ImageContainer _container;
+  BaseImageContainer? _container;
+  final TfLiteType _tfLiteType;
 
   /// Initialize a [TensorImage] object.
   ///
   /// Note: For Image with float value pixels use [TensorImage(TfLiteType.float)]
   TensorImage([TfLiteType dataType = TfLiteType.uint8])
-      : _container = _ImageContainer(dataType);
+      : _tfLiteType = dataType;
 
   /// Initialize [TensorImage] from [Image]
   ///
@@ -61,9 +65,7 @@ class TensorImage {
 
   /// Load [Image] to this [TensorImage]
   void loadImage(Image image) {
-    SupportPreconditions.checkNotNull(image,
-        message: "Cannot load null image.");
-    _container.image = image;
+    _container = ImageContainer.create(image);
   }
 
   /// Load a list of RGB pixels into this [TensorImage]
@@ -72,7 +74,6 @@
   /// and [shape] is not in form (height, width ,channels) or
   /// (1, height, width, channels)
   void loadRgbPixels(List pixels, List shape) {
-    checkImageTensorShape(shape);
     TensorBuffer buffer = TensorBuffer.createDynamic(dataType);
     buffer.loadList(pixels, shape: shape);
     loadTensorBuffer(buffer);
@@ -83,33 +84,40 @@
   /// Throws [ArgumentError] if [TensorBuffer.shape] is not in form (height, width ,channels) or
   /// (1, height, width, channels)
   void loadTensorBuffer(TensorBuffer buffer) {
-    checkImageTensorShape(buffer.getShape());
-    _container.bufferImage = buffer;
+    load(buffer, ColorSpaceType.RGB);
   }
 
-  /// Requires tensor shape [h, w, 3] or [1, h, w, 3].
-  static void checkImageTensorShape(List shape) {
-    SupportPreconditions.checkArgument(
-        (shape.length == 3 || (shape.length == 4 && shape[0] == 1)) &&
-            shape[shape.length - 3] > 0 &&
-            shape[shape.length - 2] > 0 &&
-            shape[shape.length - 1] == 3,
-        errorMessage:
-            "Only supports image shape in (h, w, c) or (1, h, w, c), and channels representing R, G, B" +
-                " in order.");
+  void load(TensorBuffer buffer, ColorSpaceType colorSpaceType) {
+    checkArgument(
+        colorSpaceType == ColorSpaceType.RGB ||
+            colorSpaceType == ColorSpaceType.GRAYSCALE,
+        message:
+            "Only ColorSpaceType.RGB and ColorSpaceType.GRAYSCALE are supported. Use" +
+                " `load(TensorBuffer, ImageProperties)` for other color space types.");
+    _container = TensorBufferContainer.create(buffer, colorSpaceType);
   }
 
   /// Gets the image width.
   ///
   /// Throws [StateError] if the TensorImage never loads data.
   /// and [ArgumentError] if the container data is corrupted.
-  int get width => _container.width;
+  int get width {
+    if (_container == null) {
+      throw new StateError("No image has been loaded yet.");
+    }
+    return _container!.width;
+  }
 
   /// Gets the image height.
   ///
   /// Throws [StateError] if the TensorImage never loads data.
   /// and [ArgumentError] if the container data is corrupted.
-  int get height => _container.height;
+  int get height {
+    if (_container == null) {
+      throw new StateError("No image has been loaded yet.");
+    }
+    return _container!.height;
+  }
 
   /// Gets the image height.
   ///
@@ -126,7 +134,9 @@
   /// Gets the current data type.
   ///
   /// Currently only UINT8 and FLOAT32 are possible.
-  TfLiteType get dataType => _container.tfLiteType!;
+  TfLiteType get dataType {
+    return _tfLiteType;
+  }
 
   /// Gets the current data type.
   ///
@@ -136,7 +146,9 @@
   /// Gets the current data type.
   ///
   /// Currently only UINT8 and FLOAT32 are possible.
-  TfLiteType get tfLiteType => _container.tfLiteType!;
+  TfLiteType get tfLiteType {
+    return _tfLiteType;
+  }
 
   /// Returns the underlying [Image] representation of this [TensorImage].
   ///
@@ -144,7 +156,12 @@
   /// concern, but if modification is necessary, please make a copy.
   ///
   /// Throws [StateError] if the TensorImage never loads data.
-  Image get image => _container.image;
+  Image get image {
+    if (_container == null) {
+      throw new StateError("No image has been loaded yet.");
+    }
+    return _container!.image;
+  }
 
   /// Returns a [ByteBuffer] representation of this [TensorImage].
   ///
@@ -154,7 +171,9 @@
   /// It's essentially a short cut for [getTensorBuffer.getBuffer()].
   ///
   /// Throws [StateError] if the TensorImage never loads data.
-  ByteBuffer get buffer => _container.tensorBuffer.getBuffer();
+  ByteBuffer get buffer {
+    return tensorBuffer.buffer;
+  }
 
   /// Returns a [ByteBuffer] representation of this [TensorImage].
   ///
@@ -172,7 +191,12 @@
   /// concern, but if modification is necessary, please make a copy.
   ///
   /// Throws [ArgumentError] if this TensorImage never loads data.
-  TensorBuffer get tensorBuffer => _container.tensorBuffer;
+  TensorBuffer get tensorBuffer {
+    if (_container == null) {
+      throw new StateError("No image has been loaded yet.");
+    }
+    return _container!.getTensorBuffer(_tfLiteType);
+  }
 
   /// Returns the underlying [TensorBuffer] representation for this [TensorImage]
   ///
@@ -182,106 +206,3 @@
   /// Throws [ArgumentError] if this TensorImage never loads data.
   TensorBuffer getTensorBuffer() => tensorBuffer;
 }
-
-// Handles RGB image data storage strategy of TensorBuffer.
-class _ImageContainer {
-  TensorBuffer? _bufferImage;
-  Image? _image;
-
-  late bool _isBufferUpdated;
-  late bool _isImageUpdated;
-  final TfLiteType? tfLiteType;
-
-  static final int? argbElementBytes = 4;
-
-  _ImageContainer(this.tfLiteType);
-
-  Image get image {
-    if (_isImageUpdated) return _image!;
-    if (!_isBufferUpdated)
-      throw StateError(
-          "Both buffer and bitmap data are obsolete. Forgot to call TensorImage.loadImage?");
-    if (_bufferImage!.getDataType() != TfLiteType.uint8) {
-      throw StateError(
-          "TensorImage is holding a float-value image which is not able to convert a Image.");
-    }
-    num reqAllocation = _bufferImage!.getFlatSize() * argbElementBytes!;
-    if (_image == null || _image!.getBytes().length < reqAllocation) {
-      List shape = _bufferImage!.getShape();
-      int h = shape[shape.length - 3];
-      int w = shape[shape.length - 2];
-      _image = Image(w, h);
-    }
-
-    _image = ImageConversion.convertTensorBufferToImage(_bufferImage!, _image!);
-    _isImageUpdated = true;
-    return _image!;
-  }
-
-  // Internal method to set the image source-of-truth with a image.
-  set image(Image value) {
-    _image = value;
-    _isBufferUpdated = false;
-    _isImageUpdated = true;
-  }
-
-  TensorBuffer get tensorBuffer {
-    if (_isBufferUpdated) {
-      return _bufferImage!;
-    }
-    SupportPreconditions.checkArgument(
-      _isImageUpdated,
-      errorMessage:
-          "Both buffer and bitmap data are obsolete. Forgot to call TensorImage#load?",
-    );
-    int requiredFlatSize = image.width * image.height * 3;
-    if (_bufferImage == null ||
-        (!_bufferImage!.isDynamic &&
-            _bufferImage!.getFlatSize() != requiredFlatSize)) {
-      _bufferImage = TensorBuffer.createDynamic(tfLiteType!);
-    }
-
-    ImageConversion.convertImageToTensorBuffer(_image!, _bufferImage!);
-    _isBufferUpdated = true;
-    return _bufferImage!;
-  }
-
-  // Internal method to set the image source-of-truth with a TensorBuffer.
-  set bufferImage(TensorBuffer value) {
-    _bufferImage = value;
-    _isImageUpdated = false;
-    _isBufferUpdated = true;
-  }
-
-  int get width {
-    SupportPreconditions.checkState(_isBufferUpdated || _isImageUpdated,
-        errorMessage:
-            "Both buffer and bitmap data are obsolete. Forgot to call TensorImage#load?");
-    if (_isImageUpdated) {
-      return image.width;
-    }
-    return _getBufferDimensionSize(-2);
-  }
-
-  int get height {
-    SupportPreconditions.checkState(_isBufferUpdated || _isImageUpdated,
-        errorMessage:
-            "Both buffer and bitmap data are obsolete. Forgot to call TensorImage#load?");
-    if (_isImageUpdated) {
-      return image.height;
-    }
-    return _getBufferDimensionSize(-3);
-  }
-
-  int _getBufferDimensionSize(int dim) {
-    List shape = _bufferImage!.getShape();
-    // The defensive check is needed because bufferImage might be invalidly changed by user
-    // (a.k.a internal data is corrupted)
-    TensorImage.checkImageTensorShape(shape);
-    dim = dim % shape.length;
-    if (dim < 0) {
-      dim += shape.length;
-    }
-    return shape[dim];
-  }
-}
diff --git a/pubspec.lock b/pubspec.lock
index 0a625b6..def05b8 100644
--- a/pubspec.lock
+++ b/pubspec.lock
@@ -22,6 +22,20 @@ packages:
       url: "https://pub.dartlang.org"
     source: hosted
     version: "2.1.0"
+  camera:
+    dependency: "direct main"
+    description:
+      name: camera
+      url: "https://pub.dartlang.org"
+    source: hosted
+    version: "0.8.1+7"
+  camera_platform_interface:
+    dependency: transitive
+    description:
+      name: camera_platform_interface
+      url: "https://pub.dartlang.org"
+    source: hosted
+    version: "2.1.0"
   characters:
     dependency: transitive
     description:
@@ -50,6 +64,13 @@
       url: "https://pub.dartlang.org"
     source: hosted
     version: "1.15.0"
+  cross_file:
+    dependency: transitive
+    description:
+      name: cross_file
+      url: "https://pub.dartlang.org"
+    source: hosted
+    version: "0.3.1+4"
   crypto:
     dependency: transitive
     description:
@@ -70,14 +91,14 @@
       name: ffi
      url: "https://pub.dartlang.org"
    source: hosted
-    version: "1.0.0"
+    version: "1.1.2"
   file:
     dependency: transitive
     description:
       name: file
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "6.1.0"
+    version: "6.1.2"
   flutter:
     dependency: "direct main"
     description: flutter
@@ -122,21 +143,21 @@
      name: path_provider
      url: "https://pub.dartlang.org"
    source: hosted
-    version: "2.0.1"
+    version: "2.0.2"
   path_provider_linux:
     dependency: transitive
     description:
       name: path_provider_linux
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "2.0.0"
+    version: "2.0.2"
   path_provider_macos:
     dependency: transitive
     description:
       name: path_provider_macos
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "2.0.0"
+    version: "2.0.2"
   path_provider_platform_interface:
     dependency: transitive
     description:
@@ -150,7 +171,14 @@
       name: path_provider_windows
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "2.0.0"
+    version: "2.0.3"
+  pedantic:
+    dependency: transitive
+    description:
+      name: pedantic
+      url: "https://pub.dartlang.org"
+    source: hosted
+    version: "1.11.1"
   petitparser:
     dependency: transitive
     description:
@@ -164,21 +192,21 @@
       name: platform
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "3.0.0"
+    version: "3.0.2"
   plugin_platform_interface:
     dependency: transitive
     description:
       name: plugin_platform_interface
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "2.0.0"
+    version: "2.0.1"
   process:
     dependency: transitive
     description:
       name: process
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "4.2.1"
+    version: "4.2.3"
   quiver:
     dependency: "direct main"
     description:
@@ -212,6 +240,13 @@
       url: "https://pub.dartlang.org"
     source: hosted
     version: "2.1.0"
+  stream_transform:
+    dependency: transitive
+    description:
+      name: stream_transform
+      url: "https://pub.dartlang.org"
+    source: hosted
+    version: "2.0.0"
   string_scanner:
     dependency: transitive
     description:
@@ -236,9 +271,9 @@
   tflite_flutter:
     dependency: "direct main"
     description:
-      path: "../tflite_flutter_plugin"
-      relative: true
-    source: path
+      name: tflite_flutter
+      url: "https://pub.dartlang.org"
+    source: hosted
     version: "0.9.0"
   tuple:
     dependency: "direct main"
     description:
@@ -267,7 +302,7 @@
       name: win32
      url: "https://pub.dartlang.org"
    source: hosted
-    version: "2.0.5"
+    version: "2.2.5"
   xdg_directories:
     dependency: transitive
     description:
@@ -281,7 +316,7 @@
       name: xml
       url: "https://pub.dartlang.org"
     source: hosted
-    version: "5.1.0"
+    version: "5.1.2"
 sdks:
-  dart: ">=2.12.0 <3.0.0"
-  flutter: ">=1.26.0-17.6.pre"
+  dart: ">=2.13.0 <3.0.0"
+  flutter: ">=2.0.0"
diff --git a/pubspec.yaml b/pubspec.yaml
index 90e9bb7..b5270b4 100644
--- a/pubspec.yaml
+++ b/pubspec.yaml
@@ -12,12 +12,11 @@ dependencies:
   meta: ^1.1.8
   quiver: ^3.0.1
   path_provider: ^2.0.1
-  tflite_flutter:
-    path: ../tflite_flutter_plugin
-  image: ^3.0.2
+  tflite_flutter: ^0.9.0
   tuple: ^2.0.0
+  camera: ^0.8.1+3
   ffi: ^1.0.0
-
+  image: ^3.0.2
 dev_dependencies:
   flutter_test:
     sdk: flutter